Saturday, November 1, 2008

Cryptography Operational Protocols and Algorithms

Leaving aside for a moment the issues of protocols and algorithms, the first choices to be made are the cryptographic strength of the system, embodied by choices of algorithm and key length, and some of the key-management questions. It is most important to choose truly random keys that are sufficiently long, keep those keys secret, and change keys "often enough." Again, this all supposes that you have selected a good cryptosystem; all the security of the system lies in the keys, and none in the algorithm itself.

We will consider key randomness and key secrecy shortly. For now, let us consider the selection of key length and the frequency of key updates.

Key Length

Given a reasonably strong algorithm, how well the data is protected depends largely on the length of the encryption key. Fundamentally, an encrypted message must remain secret for the useful life of the information. To a large extent, the value of the information in the encrypted message will govern the resources used to attack it. For example, an attacker would be foolish to spend $1 million to obtain information worth $1,000, but he might spend $1 million to obtain a secret worth $2 million. Here are some examples.

Internet 2010

Today, it is common to use 128-bit keys for symmetric algorithms, both for communications security and for the security of data to be protected for 20 years. The necessary key lengths for public-key algorithms vary considerably. The current recommendation for the RSA public-key algorithm, for example, is to use a minimum length of 1024 bits, with 2048 bits used for especially sensitive applications or long-term keys.
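To see why key length matters so much, a rough cost model helps. The sketch below is illustrative only: the attack rate (keys tried per second) is an assumed figure, not a benchmark of any real attacker.

```python
# Rough brute-force cost model: how long would an exhaustive key search take?

def years_to_search(key_bits: int, keys_per_second: float) -> float:
    """Years needed to try every key of the given length at the given rate."""
    seconds_per_year = 365 * 24 * 3600
    return 2 ** key_bits / (keys_per_second * seconds_per_year)

# Even at an (extremely generous) trillion keys per second, a 128-bit key
# space is far out of reach, and every extra bit doubles the work.
t128 = years_to_search(128, 1e12)
t129 = years_to_search(129, 1e12)
assert t129 == 2 * t128
```

The point of the model is the exponential: the attacker's budget buys a linear increase in `keys_per_second`, while each added key bit doubles the search space.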

Key Updates

Cryptographic keys do not last forever; they need to be updated from time to time. The proper lifetime of a key is a function of the value of the items encrypted, the number of items encrypted, and the lifetime of the items encrypted. We have already discussed lifetime. If a key can be broken by a properly equipped adversary in 2 years, and the lifetime of information encrypted using the key is 6 months, then the key should be changed at least every 18 months so that an attack mounted on the first item encrypted will not succeed until after the last item encrypted loses its value.
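The rotation rule in that example can be written down directly. This is a minimal sketch of the arithmetic, not a key-management policy; the function name is my own.

```python
# If an adversary needs `break_months` to break a key, and each item encrypted
# under it stays valuable for `lifetime_months`, the key must be changed at
# least every (break_months - lifetime_months) months, so that an attack begun
# on the first item finishes only after the last item has lost its value.

def max_key_lifetime(break_months: int, lifetime_months: int) -> int:
    if lifetime_months >= break_months:
        raise ValueError("key too weak: items outlive the time needed to break the key")
    return break_months - lifetime_months

# The example from the text: 2-year break time, 6-month information lifetime.
assert max_key_lifetime(24, 6) == 18
```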

The number of items encrypted is an issue for two reasons. First, if individual encrypted items have a market value, then the sum of the values of all encrypted items is the proper measure against which to balance the resources an attacker may bring to bear. Second, some cryptosystems can be attacked more easily when a large body of ciphertext is available. This effect is more difficult to quantify, but again, it is a good idea not to use a key for too long.

Another factor that leads to frequent key updates is paranoia. The longer a key has been in use, the greater the chance that someone has compromised the key storage system and obtained the key by subterfuge rather than by brute force attack.

It is important to note that changing a key does not increase the time that an attacker will need to find it using brute force or any other method of cryptographic attack. Changing keys does, however, limit the amount of information revealed if any particular key is found. For example, if the encryption keys are changed every month, then only one month's worth of information is disclosed if a key is discovered.

A Perfect Cryptosystem: One-Time Pads

Is there a perfect cryptosystem? Surprisingly, the answer is yes. It is called the one-time pad. The idea of the one-time pad is to have a completely random key that is the same length as the message. The key is never reused, and only the sender and the receiver have copies. To send, for example, a 100-bit message, the message is exclusive-ORed with 100 bits of the key. That portion of the key is crossed off, never to be used again. The receiver reverses the process, exclusive-ORing the ciphertext with her copy of the key to reveal the message. If the one-time pad key contains truly random bits, this scheme is absolutely secure. The attacker does not know what is on the pad and must guess—but there is no way to know when he is right. By changing the guess, the attacker can decode the ciphertext into any message, be it "attack at dawn" or "negotiate surrender."
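The whole scheme fits in a few lines of code. This sketch shows both the encrypt/decrypt symmetry and the "any message is possible" property described above:

```python
import secrets

# Minimal one-time pad sketch: XOR the message with an equal-length run of
# truly random key bytes, and never reuse that portion of the pad.

def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))     # one-time key material

ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # receiver recovers the message

# Any plaintext of the same length is "possible": an attacker can construct
# a pad that decrypts the same ciphertext to a completely different message.
fake = b"retreat at six"
fake_pad = otp_xor(ciphertext, fake)
assert otp_xor(ciphertext, fake_pad) == fake
```

The last two lines are the formal reason the system is unbreakable: every candidate plaintext corresponds to some pad, so the ciphertext alone gives the attacker no way to prefer one over another.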


The one-time pad offers perfect security and is indeed used when perfect security is needed, but the system has many disadvantages.

  • The pad must be truly random. Any structure at all can be used to break the system. Creating truly random characters is difficult, and creating a vast quantity of them is more difficult.
  • The pad must never be reused. If a sheet is used twice, then the two sections of ciphertext encrypted using the same page can be compared, possibly revealing both. Since the pad is consumed as messages are sent, the pad has to be very long or frequently replaced.
  • The pads must be distributed, stored, and used with absolute secrecy. Because the ciphertext cannot be successfully attacked, the obvious point of attack is to copy or substitute a pad.
  • Every pair of correspondents must have a unique pad, leading to immense practical difficulties of distribution.

These practical difficulties effectively restrict the use of one-time pad systems to situations in which cost is no object. For most other situations, cryptosystems are used in which the length of the key is fixed, and the key can be attacked by exhaustive search.

Thursday, June 19, 2008

Putting New Data into SQLSpyNet with the DTS Wizard

The DTS Wizard is a very flexible tool that allows us to import data from almost any data source, including Excel, DB2, Oracle, Access, and even text files. Why do we need this? Many organizations have data spread throughout the company. Charles will have a spreadsheet of his customers, Mary will have a small Access database that has all her suppliers, and Jenny, the secretary, might have a text file or Word document with the names and phone numbers of all the staff members.

With all of these disparate types of information floating around the office it's pretty obvious that data management can become a real nightmare! Duplication of data is inevitable, and retrieval is almost impossible.


But here comes SQL Server 2000 to the rescue! With the introduction of DTS in SQL Server version 7.0, many organizations have been able to relieve the pain of pulling all these disparate pieces of information together. The DTS Wizard offers a point-and-click approach to importing (or even exporting) data into our database. We go through a series of steps, selecting the database into which to enter the data, where the data is to come from, and so forth.

This is all fine for simple data, but what happens when we have more complex data that doesn't have a clear definition between first name and surname? DTS can even take care of this, with your help of course.

Within a DTS package we can define VBScript that can be used to manipulate, format, or rerun a process. This gives us great flexibility when it comes to altering our data either before, during, or after the insert to the database.

There are many aspects to DTS packages, and we will only have a quick look at the basics. I suggest that you play with DTS packages in SQL Server 2000 because they rock! And the flexibility and control you have over the import/export process will surprise you.

Before we actually insert the data into our database, we are going to delete all the data out of our database. This will allow us to start from a clean slate. Just before we do this, though, let's back up our database.

Backing Up the Database Before the Transfer

You might be wishing you didn't have to do this, but do you want to know the cool thing? You do not have to write the backup statement again! Because we wrote a backup task earlier (see Chapter 11, the section titled "Scheduling Jobs"), we can actually force it to run immediately. This means we can create a full database backup just by right-clicking on the job (under Management, SQL Server Agent, and then Jobs), and selecting Start Job.

You will see that the status of the job is set to executing. When the job has finished, the status will change to either Succeeded or Failed.

So there we go, a backup done nice and simply, with no extra code!

Monday, June 16, 2008

Debugging Stored Procedures

I have performed many tasks in numerous different roles, and one of the most frustrating has been debugging stored procedures. But no more! The ability to debug stored procedures as though we were debugging code in any other development platform is one of the enhancements to SQL Server 2000's Query Analyzer. We can insert break points, step into, step over, and so forth. This is wonderful for those of us who have tried to monitor what is happening in a stored procedure.

Previously we could do this with SQL Server 7.0 and Visual InterDev, but there was a lot of overhead in setting it up. With the new debugging tools, all we do is right-click and select Debug. How simple is that?

We have the standard debugging windows as well. We can get the values of variables from the Watch window and view the procedures that have been called and not completed in the Callstack window. This makes it easy to migrate from a development environment to using SQL Server 2000. So you Access developers out there must be getting really excited by now!


What? No More Room?

One of the trouble spots a DBA must keep an eye on is conserving a computer's most precious resources: memory and disk space.

If we have several databases on one server, we can find that we run out of disk space, and if that happens, our databases will fail.

Of course, in Spy Net's fictional scenario, that could mean World War III! But in the real world, running out of space can still cause serious problems, especially in mission-critical databases such as utilities or emergency response systems. In this section, we look at the causes of resource failure and several ways to avoid down time, including managing file and log size.

How Memory Affects Database Transactions

Memory-deprived databases will fail because tempdb is where most of the changes that you make to your data are performed before they are committed to disk. If you have enough RAM available, SQL Server 2000 will put as much of your database as it can into RAM. After all, it is much faster to read from RAM than to scan a disk for the information. However, if RAM is in short supply, or you have concerns about the amount of disk space your database is eating, relax, because we even have control over that.

When you are in Enterprise Manager you have the option to view how much space is allocated to your database's data files and how much is used. To see this information, simply click the SQLSpyNet (or any other) database within Enterprise Manager, and a summary screen displays the allocated and used space.

Shrinking Your Data Files to Reduce the Database

When we are talking about shrinking our data files we are not actually referring to the process of compacting them like a zip program would.

If we shrink our data files, we remove unused data pages. For example, if we had a table that had five data pages on which it stored the data, and we deleted two pages worth of the data, although our table would have only three pages that actually stored data, SQL Server 2000 still would have five pages allocated to the table.

When we shrink the data files, we just get rid of the extra two pages that the table was using. This does, however, have restrictions, but I think you get the idea.

What do we do when our data files are too large? Although we cannot shrink an entire database smaller than its original size, we can shrink our data files smaller than their original allocation sizes. We must do this by shrinking each data file individually by using the DBCC SHRINKFILE Transact-SQL statement. This allows us to reallocate how much space the given data file is allowed to use.
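DBCC SHRINKFILE takes the logical name of the data file and a target size in megabytes. As a sketch, here is a small helper that builds the statement; the file name and target size used below are hypothetical, and in practice you would run the resulting statement in Query Analyzer.

```python
# Hypothetical helper that builds the DBCC SHRINKFILE statement described
# above: DBCC SHRINKFILE (logical_file_name, target_size_in_MB).

def build_shrinkfile(logical_name: str, target_mb: int) -> str:
    if target_mb < 0:
        raise ValueError("target size must be non-negative")
    return f"DBCC SHRINKFILE ({logical_name}, {target_mb})"

# e.g. shrink the (hypothetical) SQLSpyNet data file down to 50 MB:
print(build_shrinkfile("SQLSpyNet_Data", 50))
```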

Sunday, May 4, 2008

BadArticle Article Rewriter

BadArticle Article Rewriter is a unique article rewriter which allows you to create two or more versions of the same article, each with different synonyms and sentence structures. This helps your SEO dramatically, as you can have hundreds of "different" articles on the same topic which all share the same keywords. It is a brilliantly original idea and can save you hundreds of dollars in fees paid to people who rewrite your articles. It also means you can write using the same adjective all the time, and the program will change it for you into more intriguing words. Without spending any money, you could have tons of fresh, exclusive content for people to read on your website!

We decided to try this cool program out and found it is as top notch as paying someone to rewrite it! It is a very easy tool to use and we think that this idea will be the way forward.


So, how does this new program work? Well, there are three different levels of rewriting, depending on how different you want your article to be, and the computer changes the synonyms and sentences to make new content (for example, "simple" becomes "plain").

Bad Rewrite

This only changes the most changeable synonyms. This means that your article will still be very legible and mistakes made by the computer aren't common.

Worse Rewrite

This changes most synonyms and therefore most of your article. It is very hard to tell that the resulting articles originated from a single source. The synonym database for this level is huge, so don't worry about seeing too much of any one word. The output, however, is slightly less legible than the Bad Rewrite level, which replaces only the easiest words to substitute.

Worst Rewrite

This completely replaces sentences but keeps your keywords. Sometimes this means the sentence turns out illegible, but it is usually a safe and quick way to get new content. (You may want to check it first for 100% accuracy)

Well, now you know how it works and how you can create tons of articles quickly and easily, or just spice up an existing article! We tested it with great results, as you can clearly see by the sample underneath.

So, here's the first sentence of this article rewritten:

"BadArticle Article Rewriter is an exceptional article rewriter which permits you to create two or more of the identical article but with varied synonyms and sentence arrangements."

Although grammatically the sentence is not 100% correct, it still contains all the keywords and therefore boosts your SEO. In addition, if you want the new article to have top grammar, it only takes a second to fix in Microsoft Word.

Personally, we decided that this is a very useful tool for bulk rewriting and content creation, but we still found that we needed to correct some small errors. Therefore, maybe future versions will include a simple grammar checker just like the one in Microsoft Word.

Our final verdict on this program: use it. It's the only free article rewriter we know of, plus it rewrites in less than 10 seconds! Why pay for hundreds of hours of people's time when you have this powerful resource right in front of your very eyes!

Saturday, April 26, 2008

Protocol Negotiation and Session Setup continue…

Reading and Writing

The SMB protocol uses the READ and WRITE message types to perform I/O operations on a file for the client. Using the READ request, a client can request that the server return information from the file by specifying a number of bytes and an offset into the file. The server returns the data, indicating the actual number of bytes returned, which can be less than requested if the user tries to read past the end of a file.

The WRITE command updates a file in a similar manner. The client sends in the data that will be written, indicating the number of bytes to write and an offset into the file where the write operation will begin. If the request causes a write past the end of the file, the file is extended to make it larger. The server sends a response telling the client the number of bytes that were written. If the number is less than the requested value, an error has occurred.
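The offset semantics of READ and WRITE can be captured in a toy model. This is illustrative only: real SMB messages carry headers, TIDs, and FIDs, and a real server works against the actual file system.

```python
# Toy model of the SMB READ/WRITE semantics described above: READ returns up
# to `count` bytes starting at `offset`, and the actual number returned can be
# smaller if the read runs past the end of the file; WRITE past the end of the
# file extends it.

def smb_read(file_data: bytes, offset: int, count: int) -> bytes:
    return file_data[offset:offset + count]

def smb_write(file_data: bytes, offset: int, data: bytes) -> bytes:
    if offset > len(file_data):
        # Writing past the end of the file extends it (zero-filled gap).
        file_data = file_data + b"\x00" * (offset - len(file_data))
    return file_data[:offset] + data + file_data[offset + len(data):]

f = b"hello world"
assert smb_read(f, 6, 5) == b"world"
assert len(smb_read(f, 6, 100)) == 5     # short read at end of file
f = smb_write(f, 11, b"!")               # write at end extends the file
assert f == b"hello world!"
```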


To increase read/write performance, the READ_RAW and WRITE_RAW message types can be used to exchange much larger blocks of information between the client and the server. When these are used, the client can have only one such request outstanding at the server. In a single send, the server responds with data that can be as much as 65,535 bytes in length. WRITE_RAW works in the opposite direction, allowing the client to send a large buffer of raw data to the server for a write operation.

Locking Mechanisms

Locking allows a particular client exclusive access to a file, or part of a file, when it is shared on the network. In SMB, such a lock is called an opportunistic lock, or oplock for short. This is best explained by looking at the way it works. A client can request one of three kinds of locks on a resource. The first is an exclusive lock, in which the client has exclusive access to the data held by the lock. A batch oplock is one that is kept open by the server even when the client process has already closed the file. A Level II oplock is one in which there can be multiple readers of the same file.

The locking process consists of the client requesting the type of lock it wants when it opens the file. The server replies to the client with the type of lock that was granted when it responds to the open request.

A lock gives the client the capability to efficiently manage the buffer space it uses when accessing a file over the network. For example, if a client has exclusive access to a file and is performing writes to it, it can buffer a lot of the newly written information before having to send it to the server to update the file. This reduces the number of network packets sent when updating a file. A client that has an exclusive lock on a file can also buffer read-ahead data to make reading a file much faster.

These locks are called opportunistic locks for a reason. A client can be granted exclusive access to a file if no other client has it open at the time of the request. What happens when another client needs to read the file? The server notifies the first client that it needs to break the exclusive lock. The client then flushes its buffers so that any data that has not been written to the file is processed. The client then sends an acknowledgment to the server that it recognizes that the exclusive lock has been broken.

Batch oplocks are used to reduce the amount of traffic on the network when some programs require continual reopening of a file to obtain commands, as when a batch command procedure is executed.

For example, a batch procedure executed by the command processor usually opens a file, locates the next line to be executed, reads that line, closes the file, and then executes the command. The problem with this is that these steps are taken for each command line in the procedure, resulting in multiple file open/closes that are not really necessary.

This procedure for reading individual lines from a file is done by using a batch oplock whereby the client can read the data from its local read-ahead cache instead of reopening the file on the remote server to get each line.

Level II oplocks were new with the NT changes to SMB. This kind of lock allows more than one client to have a file opened for reading. When a client must read from a file that is opened by another exclusively, the server informs the current client that its exclusive lock has been broken and is now a Level II oplock. No client that has a Level II oplock will buffer data to or from the file. Thus, after the lock has changed to a Level II oplock (and the first client has flushed any data in its buffers), both clients can continue reading the file.
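The break-to-Level-II sequence above can be sketched as a small state model. The class and method names here are hypothetical illustrations, not real SMB wire types, and the flush step is reduced to a comment.

```python
# Illustrative sketch of the oplock break described above: the first reader
# gets an exclusive oplock; when a second reader arrives, the server breaks
# the exclusive lock down to Level II (the holder flushes its buffers), and
# then both clients read without client-side write buffering.

class OplockServer:
    def __init__(self):
        self.locks = {}                      # client -> "exclusive" or "level2"

    def open_for_read(self, client):
        if not self.locks:
            self.locks[client] = "exclusive"
        else:
            for holder, kind in self.locks.items():
                if kind == "exclusive":
                    # Notify holder; it flushes unwritten data, then acks.
                    self.locks[holder] = "level2"
            self.locks[client] = "level2"
        return self.locks[client]

srv = OplockServer()
assert srv.open_for_read("client_a") == "exclusive"
assert srv.open_for_read("client_b") == "level2"
assert srv.locks["client_a"] == "level2"     # exclusive lock was broken
```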

Friday, April 25, 2008

Protocol Negotiation and Session Setup

SMB has a built-in mechanism that is used by the client and server to determine each other's capabilities so that a common protocol version can be established for the network connection. The first SMB message that the client sends to the server is one of the SMB_COM_NEGOTIATE type. The client uses this message to send the server a list of the dialects it understands. The server selects the most recent dialect it understands from the client's list and returns a message to it.

The response the server returns depends on the type of client. The information includes the dialect selected and can include additional information, such as buffer sizes, supported access modes, time and date values, and security information. After the client receives this response, it can continue to set up the session by using the SESSION_SETUP_ANDX message type.
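The dialect-selection step can be sketched in a few lines. This is an assumption-laden illustration: the dialect strings below are examples, and a real server matches on the order of its own supported list.

```python
# Sketch of SMB_COM_NEGOTIATE dialect selection as described above: the
# client offers the dialects it understands, and the server picks the most
# recent one it also supports.

SERVER_DIALECTS = ["CORE", "LANMAN1.0", "LM1.2X002", "NT LM 0.12"]  # oldest..newest

def negotiate(client_dialects):
    """Return the newest dialect both sides understand, or None."""
    common = [d for d in SERVER_DIALECTS if d in client_dialects]
    return common[-1] if common else None

assert negotiate(["CORE", "LANMAN1.0"]) == "LANMAN1.0"
assert negotiate(["NT LM 0.12", "CORE"]) == "NT LM 0.12"
assert negotiate(["FUTURE 9.9"]) is None     # no common dialect
```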


If the initial server response indicates that user-level security is being used, this message type can be used to perform a user logon. The client sets a value in the message header called the UID (user ID) for the account it wants to use. It also supplies the account name and password to the server by using this message type. If these values are validated by the server, the user can continue to use the UID to make subsequent accesses.

Other setup functions that are performed by using SESSION_SETUP_ANDX include the following:

  • Set the maximum values for the size of buffers that will be used in the message exchange.
  • Set the maximum number of client requests that can be outstanding at the server.
  • Set the virtual circuit (VC) number.

If the VC passed to the server is zero and the server has other circuits open for the client, it will abort those services, assuming that the client has rebooted without freeing those services first. To properly close a session, the client uses the message type LOGOFF_ANDX, which causes the server to close all files associated with the user's UID.

Accessing Files

Other SMB message types are used to traverse the resource directory and to open, read, write, and close files. First, the user must connect to the resource by using the TREE_CONNECT message. The message includes the name of the resource (server and share name) and, for earlier clients that do not perform logons, a shared password. The server responds by sending the user a value called the TID (Tree ID), which will be used in SMBs exchanged for this connection.

After the connection has been established, several basic SMB command formats can be used to manipulate files and directories that reside on the share. For example, the CREATE_DIRECTORY message is used to create a new directory in the file share's directory structure. The client passes the pathname for the new directory, and the server creates the directory, provided that the client has the appropriate access rights or permissions. The DELETE_DIRECTORY SMB message can be used to remove a directory, again based on the functions allowed for the username.

Opening and Closing Files

The OPEN message is used by a client to open a file. The path for the file is given, relative to the file share root. The client specifies the access that is desired, such as read, write, or share. If the file is successfully opened, the server returns a File ID (FID) to the client, which is used to further access the file using other SMB message types; it is similar to a file handle, which most programmers will recognize.

The server also returns data to the client indicating the actual access that was granted: read-only, write-only, or read/write. The CLOSE message is sent by the client to tell the server to release any locks on the file held by the client. After this message, the client can no longer use the FID to access the file; it must instead reopen the file and obtain a new value.
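The FID lifecycle described above can be sketched as a toy server. Everything here is hypothetical illustration, not real SMB message handling: a FID behaves like a file handle that OPEN issues and CLOSE invalidates.

```python
# Toy model of the FID lifecycle: OPEN returns a file ID that subsequent
# requests must present, and CLOSE invalidates it.

class FileServer:
    def __init__(self):
        self._next_fid = 1
        self._open = {}                  # fid -> path

    def open(self, path: str) -> int:
        fid = self._next_fid
        self._next_fid += 1
        self._open[fid] = path
        return fid

    def read(self, fid: int) -> str:
        if fid not in self._open:
            raise ValueError("invalid FID: file was closed or never opened")
        return self._open[fid]

    def close(self, fid: int) -> None:
        self._open.pop(fid, None)

srv = FileServer()
fid = srv.open("report.txt")
assert srv.read(fid) == "report.txt"
srv.close(fid)
try:
    srv.read(fid)                        # the old FID is no longer valid
    raise AssertionError("stale FID should have been rejected")
except ValueError:
    pass
```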

When a client does not know the exact name of a file that it wants to open, the SEARCH message can be used to perform a directory lookup. This function enables wildcards to be used, and the server response can include more than one filename that matches the request.
