2.3 Module 2 Discussion, homework help

Asked: Mar 28th, 2017
$10

Question Description

Hi all!


The field of information security is filled with stories of hackers and crackers who have supposedly "gone good," finding redemption by switching allegiances and using their skills to benefit corporations, governments, and institutions instead of undermining them. The best known of these may be Kevin Mitnick, who got caught, served Federal time, and is now out writing books and lecturing about the virtues of "good" security policy from the boardroom of his new company. Frank Abagnale -- whose story is told in the book (and movie) "Catch Me If You Can" -- is another example, although a very different guy than Mitnick!

Now, start to think about the following questions and discuss with your classmates:

Would you hire a former criminal hacker to work in your business? Can someone with such a background be put to work and trusted? What are the advantages/disadvantages? Is it ethical?

This is a hot topic of debate in the industry. Search the Internet and any other resources you can find to add details to the discussion. For the purposes of debate, consider taking opposing sides.

Oh -- I am not trying to stifle any comments or observations but please do NOT use the argument, "I would hire a 'former criminal hacker' because they are so much smarter than everyone else" -- unless you can back that up with even one documented study that supports the assertion! ~ comment from Course Developer

Discuss and interact regularly with your classmates, bringing the discussion to a close by the end of Week 3.

Refer to the rubric for grading criteria.

Unformatted Attachment Preview

CHAPTER 3

Protocols

It is impossible to foresee the consequences of being clever.
— Christopher Strachey

Every thing secret degenerates, even the administration of justice; nothing is safe that does not show how it can bear discussion and publicity.
— Lord Acton

3.1 Introduction

If security engineering has a deep unifying theme, it is the study of security protocols. We’ve come across a few protocols informally already — I’ve mentioned challenge-response authentication and Kerberos. In this chapter, I’ll dig down into the details. Rather than starting off with a formal definition of a security protocol, I will give a rough indication and then refine it using a number of examples. As this is an engineering book, I will also give many examples of how protocols fail.

A typical security system consists of a number of principals such as people, companies, computers and magnetic card readers, which communicate using a variety of channels including phones, email, radio, infrared, and by carrying data on physical devices such as bank cards and transport tickets. The security protocols are the rules that govern these communications. They are typically designed so that the system will survive malicious acts such as people telling lies on the phone, hostile governments jamming radio, or forgers altering the data on train tickets. Protection against all possible attacks is often too expensive, so protocols are typically designed under certain assumptions about the threats. For example, the logon protocol that consists of a user entering a password into a machine assumes that she can enter it into the right machine. In the old days of hard-wired terminals in the workplace, this was reasonable; now that people log on to websites over the Internet, it is much less so. Evaluating a protocol thus involves answering two questions: first, is the threat model realistic? Second, does the protocol deal with it?
Protocols may be extremely simple, such as swiping a badge through a reader in order to enter a building. They often involve interaction, and do not necessarily involve technical measures like cryptography. For example, when we order a bottle of fine wine in a restaurant, the standard wine-waiter protocol provides some privacy (the other diners at our table don’t learn the price), some integrity (we can be sure we got the right bottle and that it wasn’t switched for, or refilled with, cheap plonk) and non-repudiation (it’s hard for the diner to complain afterwards that the wine was off). Blaze gives other examples from applications as diverse as ticket inspection, aviation security and voting in [185].

At the technical end of things, protocols can be much more complex. The world’s bank card payment system has dozens of protocols specifying how customers interact with cash machines and retail terminals, how a cash machine or terminal talks to the bank that operates it, how the bank communicates with the network operator, how money gets settled between banks, how encryption keys are set up between the various cards and machines, and what sort of alarm messages may be transmitted (such as instructions to capture a card). All these protocols have to work together in a large and complex system.

Often a seemingly innocuous design feature opens up a serious flaw. For example, a number of banks encrypted the customer’s PIN using a key known only to their central computers and cash machines, and wrote it to the card magnetic strip. The idea was to let the cash machine verify PINs locally, which saved on communications and even allowed a limited service to be provided when the cash machine was offline.
After this system had been used for many years without incident, a programmer (who was playing around with a card reader used in a building access control system) discovered that he could alter the magnetic strip of his own bank card by substituting his wife’s bank account number for his own. He could then take money out of her account using the modified card and his own PIN. He realised that this enabled him to loot any other customer’s account too, and went on to steal hundreds of thousands over a period of years. The affected banks had to spend millions on changing their systems.

And some security upgrades can take years; at the time of writing, much of Europe has moved from magnetic-strip cards to smartcards, while America has not. Old and new systems have to work side by side so that European cardholders can buy from American stores and vice versa. This also opens up opportunities for the crooks; clones of European cards are often used in magnetic-strip cash machines in other countries, as the two systems’ protection mechanisms don’t quite mesh.

So we need to look systematically at security protocols and how they fail. As they are widely deployed and often very badly designed, I will give a number of examples from different applications.

3.2 Password Eavesdropping Risks

Passwords and PINs are still the foundation on which much of computer security rests, as they are the main mechanism used to authenticate humans to machines. I discussed the usability and ‘human interface’ problems of passwords in the last chapter. Now let us consider some more technical attacks, of the kind that we have to consider when designing more general protocols that operate between one machine and another. A good case study comes from simple embedded systems, such as the remote control used to open your garage or to unlock the doors of cars manufactured up to the mid-1990s.
These primitive remote controls just broadcast their serial number, which also acts as the password. An attack that became common was to use a ‘grabber’, a device that would record a code broadcast locally and replay it later. These devices, seemingly from Taiwan, arrived on the market in about 1995; they enabled thieves lurking in parking lots to record the signal used to lock a car door and then replay it to unlock the car once the owner had left.[1]

One countermeasure was to use separate codes for lock and unlock. But this is still not ideal. First, the thief can lurk outside your house and record the unlock code before you drive away in the morning; he can then come back at night and help himself. Second, sixteen-bit passwords are too short. It occasionally happened that people found they could unlock the wrong car by mistake (or even set the alarm on a car whose owner didn’t know he had one [217]). And by the mid-1990s, devices appeared which could try all possible codes one after the other. A code will be found on average after about 2^15 tries, which at ten per second takes under an hour. A thief operating in a parking lot with a hundred vehicles within range would be rewarded in less than a minute with a car helpfully flashing its lights.

So another countermeasure was to double the length of the password from 16 to 32 bits. The manufacturers proudly advertised ‘over 4 billion codes’. But this only showed they hadn’t really understood the problem. There was still only one code (or two codes) for each car, and although guessing was now impractical, grabbers still worked fine.

[1] With garage doors it’s even worse. A common chip is the Princeton PT2262, which uses 12 tri-state pins to encode 3^12 or 531,441 address codes. However implementers often don’t read the data sheet carefully enough to understand tri-state inputs and treat them as binary instead, getting 2^12. Many of them only use eight inputs, as the other four are on the other side of the chip. And as the chip has no retry-lockout logic, an attacker can cycle through the combinations quickly and open your garage door after 2^7 attempts on average.

Using a serial number as a password has a further vulnerability: there may be many people with access to it. In the case of a car, this might mean all the dealer staff, and perhaps the state motor vehicle registration agency. Some burglar alarms have also used serial numbers as master passwords, and here it’s even worse: the serial number may appear on the order, the delivery note, the invoice and all the other standard commercial paperwork.

Simple passwords are sometimes the appropriate technology, even when they double as serial numbers. For example, my monthly season ticket for the swimming pool simply has a barcode. I’m sure I could make a passable forgery with our photocopier and laminating machine, but as the turnstile is attended and the attendants get to know the ‘regulars’, there is no need for anything more expensive. My card keys for getting into the laboratory where I work are slightly harder to forge: the one for student areas uses an infrared barcode, while the card for staff areas has an RFID chip that states its serial number when interrogated over short-range radio. Again, these are probably quite adequate — our more expensive equipment is in rooms with fairly good mechanical door locks. But for things that lots of people want to steal, like cars, a better technology is needed. This brings us to cryptographic authentication protocols.

3.3 Who Goes There? — Simple Authentication

A simple example of an authentication device is an infrared token used in some multistorey parking garages to enable subscribers to raise the barrier.
This first transmits its serial number and then sends an authentication block consisting of the same serial number, followed by a random number, all encrypted using a key which is unique to the device. We will postpone discussion of how to encrypt data and what properties the cipher should have; we will simply use the notation {X}K for the message X encrypted under the key K. Then the protocol between the access token in the car and the parking garage can be written as:

T → G : T, {T, N}KT

This is the standard protocol engineering notation, and can be a bit confusing at first, so we’ll take it slowly. The in-car token sends its name T followed by the encrypted value of T concatenated with N, where N stands for ‘number used once’, or nonce. Everything within the braces is encrypted, and the encryption binds T and N together as well as obscuring their values. The purpose of the nonce is to assure the recipient that the message is fresh, that is, it is not a replay of an old message that an attacker observed.

Verification is simple: the parking garage server reads T, gets the corresponding key KT, deciphers the rest of the message, checks that the nonce N has not been seen before, and finally that the plaintext contains T (which stops a thief in a car park from attacking all the cars in parallel with successive guessed ciphertexts).

One reason many people get confused is that to the left of the colon, T identifies one of the principals (the token which represents the subscriber) whereas to the right it means the name (that is, the serial number) of the token. Another is that once we start discussing attacks on protocols, we can suddenly start finding that the token T’s message intended for the parking garage G was actually intercepted by the freeloader F and played back at some later time. So the notation is unfortunate, but it’s too well entrenched now to change easily.
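The verification steps just described can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: the names (`GarageServer`, `toy_encrypt`) are invented, and a keystream derived from SHA-256 stands in for a proper cipher, which a production system would never use this way.

```python
# Toy sketch of the token protocol T -> G : T, {T, N}_KT.
# A SHA-256-derived XOR keystream stands in for a real cipher here;
# a deployed system would use authenticated encryption instead.
import hashlib
import os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a keystream expanded from the key (toy cipher)."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR stream cipher: same operation both ways

class GarageServer:
    def __init__(self, keys):
        self.keys = keys          # token name -> key KT
        self.seen_nonces = set()  # nonces already accepted

    def verify(self, token_name: bytes, ciphertext: bytes) -> bool:
        key = self.keys.get(token_name)
        if key is None:
            return False
        plaintext = toy_decrypt(key, ciphertext)
        name, nonce = plaintext[:8], plaintext[8:]
        if name != token_name:         # stops guessed-ciphertext attacks
            return False
        if nonce in self.seen_nonces:  # stops replay of an old message
            return False
        self.seen_nonces.add(nonce)
        return True

# The token's side: send T, {T, N}_KT with a fresh random nonce.
KT = b"secret-token-key"
server = GarageServer({b"TOKEN001": KT})
nonce = os.urandom(8)
message = toy_encrypt(KT, b"TOKEN001" + nonce)

assert server.verify(b"TOKEN001", message)      # fresh message accepted
assert not server.verify(b"TOKEN001", message)  # replay rejected
```

The two assertions at the end capture the whole point of the nonce: the same ciphertext is accepted exactly once.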
Professionals often think of the T → G to the left of the colon as simply a hint as to what the protocol designer had in mind.

The term nonce can mean anything that guarantees the freshness of a message. A nonce can, according to the context, be a random number, a serial number, a random challenge received from a third party, or even a timestamp. There are subtle differences between these approaches, such as in the level of resistance they offer to various kinds of replay attack, and they increase system complexity in different ways. But in very low-cost systems, the first two predominate as it tends to be cheaper to have a communication channel in one direction only, and cheap devices usually don’t have clocks.

Key management in such devices can be very simple. In a typical garage token product, each token’s key is simply its serial number encrypted under a global master key KM known to the central server:

KT = {T}KM

This is known as key diversification. It’s a common way of implementing access tokens, and is very widely used in smartcard-based systems as well. But there is still plenty of room for error. One old failure mode that seems to have returned is for the serial numbers not to be long enough, so that someone occasionally finds that their remote control works for another car in the car park as well. Having 128-bit keys doesn’t help if the key is derived by encrypting a 16-bit serial number.

Weak ciphers also turn up. One token technology used by a number of car makers in their door locks and immobilisers employs a block cipher known as Keeloq, which was designed in the late 1980s to use the minimum number of gates; it consists of a large number of iterations of a simple round function.
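The key diversification scheme KT = {T}KM above can be sketched as follows. HMAC-SHA256 is used here purely as a stand-in for the encryption a real product would perform, and the key values are invented for illustration.

```python
# Key diversification: each token's key KT is derived from its serial
# number T under a global master key KM, i.e. KT = {T}_KM.
# HMAC-SHA256 stands in for the block-cipher encryption a real product uses.
import hashlib
import hmac

KM = b"global-master-key"  # known only to the central server

def diversify(master_key: bytes, serial: bytes) -> bytes:
    return hmac.new(master_key, serial, hashlib.sha256).digest()

# Each token gets its own key at manufacture...
kt_1 = diversify(KM, b"TOKEN001")
kt_2 = diversify(KM, b"TOKEN002")
assert kt_1 != kt_2

# ...and the server can recompute any token's key on demand, so it
# only needs to store KM, not a database of per-token keys.
assert diversify(KM, b"TOKEN001") == kt_1

# The caveat from the text: a 256-bit derived key buys nothing if the
# serial is only 16 bits -- there are then at most 2**16 distinct keys.
assert 2 ** 16 == 65_536
```

The design win is storage: the server holds one secret, yet can authenticate every token ever issued.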
However in recent years an attack has been found on ciphers of this type, and it works against Keeloq; it takes about an hour’s access to your key to collect enough data for the attack, and then about a day on a PC to process it and recover the embedded cryptographic key [172]. You might not think this a practical attack, as someone who gets access to your key can just drive off with your car. However, in some implementations, there is also a terrible protocol vulnerability, in that the key diversification is not done using the block cipher itself, but using exclusive-or: KT = T ⊕ KM. So once you have broken a single vehicle key for that type of car, you can immediately work out the key for any other car of that type. The researchers who found this attack suggested ‘Soon, cryptographers will drive expensive cars.’

Indeed protocol vulnerabilities usually give rise to more, and simpler, attacks than cryptographic weaknesses do. At least two manufacturers have made the mistake of only checking that the nonce is different from last time, so that given two valid codes A and B, the series ABABAB... was interpreted as a series of independently valid codes. A thief could open a car by replaying the last-but-one code.

A further example comes from the world of prepayment utility meters. Over a million households in the UK, plus many millions in developing countries, have an electricity or gas meter that accepts encrypted tokens; the householder buys a token, takes it home and inserts it into the meter, which then dispenses the purchased quantity of energy. One electricity meter widely used in South Africa checked only that the nonce in the decrypted command was different from last time. So the customer could charge the meter up to the limit by buying two low-value power tickets and then repeatedly feeding them in one after the other [59].

So the question of whether to use a random number or a counter is not as easy as it might seem [316].
If you use random numbers, the lock has to remember a reasonable number of past codes. You might want to remember enough of them to defeat the valet attack. Here, someone who has temporary access to the token — such as a valet parking attendant — can record a number of access codes and replay them later to steal your car. Providing enough nonvolatile memory to remember hundreds or even thousands of old codes might push you to a more expensive microcontroller, and add a few cents to the cost of your lock.

If you opt for counters, the problem is synchronization. The key may be used for more than one lock; it may also be activated repeatedly by jostling against something in your pocket (I once took an experimental token home where it was gnawed by my dogs). So there has to be a way to recover after the counter has been incremented hundreds or possibly even thousands of times. This can be turned to advantage by allowing the lock to ‘learn’, or synchronise on, a key under certain conditions; but the details are not always designed thoughtfully.

One common product uses a sixteen-bit counter, and allows access when the deciphered counter value is the last valid code incremented by no more than sixteen. To cope with cases where the token has been used more than sixteen times elsewhere (or gnawed by a family pet), the lock will open on a second press provided that the counter value has been incremented between 17 and 32,767 times since a valid code was entered (the counter rolls over so that 0 is the successor of 65,535). This is fine in many applications, but a thief who can get six well-chosen access codes — say for values 0, 1, 20,000, 20,001, 40,000 and 40,001 — can break the system completely. So you would have to think hard about whether your threat model includes a valet able to get access codes corresponding to chosen counter values, either by patience or by hardware hacking.
A recent example of design failure comes from TinyOS, an operating system used in sensor networks based on the IEEE 802.15.4 ad-hoc networking standard. The TinySec library commonly used for security protocols contains not one, but three counters. The first is lost as the radio chip driver overwrites it, the second isn’t remembered by the receiver, and although the third is functional, it’s used for reliability rather than security. So if someone monkeys with the traffic, the outcome is ‘error’ rather than ‘alarm’, and the network will resynchronise itself on a bad counter [340].

So designing even a simple token authentication mechanism is not at all straightforward. There are many attacks that do not involve ‘breaking’ the encryption. Such attacks are likely to become more common as cryptographic authentication mechanisms proliferate, many of them designed by programmers who thought the problem was easy and never bothered to read a book like this one. And there are capable agencies trying to find ways to defeat these remote key entry systems; in Thailand, for example, Muslim insurgents use them to detonate bombs, and the army has responded by deploying jammers [1000].

Another important example of authentication, and one that’s politically contentious for different reasons, is ‘accessory control’. Many printer companies embed authentication mechanisms in printers to ensure that genuine toner cartridges are used. If a competitor’s product is loaded instead, the printer may quietly downgrade from 1200 dpi to 300 dpi, or simply refuse to work at all. Mobile phone vendors make a lot of money from replacement batteries, and now use authentication protocols to spot competitors’ products so they can be blocked or even drained more quickly. All sorts of other industries are getting in on the act; there’s talk in the motor trade of cars that authenticate their major spare parts.
I’ll discuss this in more detail in Chapter 22 along with copyright and rights management generally. Suffice it to say here that security mechanisms are used more and more to support business models, by accessory control, rights management, product tying and bundling. It is wrong to assume blindly that security protocols exist to keep ‘bad’ guys ‘out’. They are increasingly used to constrain the lawful owner of the equipment in which they are built; their purpose may be of questionable legality or contrary to public policy.

3.3.1 Challenge and Response

Most cars nowadays have remote-controlled door unlocking, though most also have a fallback metal key to ensure that you can still get into your car even if the RF environment is noisy. Many also use a more sophisticated two-pass protocol, called challenge-response, to actually authorise engine start. As the car key is inserted into the steering ...

Tutor Answer

School: Duke University

Attached is the assignment. Kindly communicate in the event of any edits.

Tech Support, Hiring Criminal Hackers

Running Head: TECH SUPPORT


Tech Support, Hiring Criminal Hackers
Institutional Affiliation:



Tech Support, Hiring Criminal Hackers
Unlike other professions, which are flooded with applicants, the pool of cyber security
professionals in the United States has experienced a major shortage relative to the required
workforce. The government was expected to hire over 10,000 cyber security experts over the years to be ab...


Good stuff. Would use again.
