Part seven of the series detailing the OWASP top 10 web application vulnerabilities with a focus on password hashing. (See intro)

"Insecure cryptographic storage" relates to a number of aspects, but I think that it can be broken down to two main areas: Encryption and Hashing.

As these are similar in some respects and are often both used together, there's a bit of confusion around what they are.

Firstly, encryption uses a mathematical formula to transform human-readable data into an unreadable form by means of a key. Encryption is often a symmetric process. That is, the same (or a trivially derived) key is used to lock (encrypt) the data as to unlock (decrypt) it. This differs from asymmetric (or public-key) encryption, where two different keys are employed - one for locking and the other for unlocking. One constant in encryption is that there is a key which must be kept safe. The key is simply a sequence of data and may be stored in a file on the server if a program needs it continuously. This obviously implies that anyone who can break into the server and get access to the key can unlock all the data.
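
As a rough illustration, using the standard Java crypto classes with AES/GCM picked arbitrarily as the algorithm, a symmetric encrypt/decrypt round trip with a single key looks something like this:

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SymmetricExample {
    public static void main(String[] args) throws Exception {
        // The single secret key - this is the thing that must be kept safe
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // A random nonce/IV for GCM mode (stored alongside the ciphertext)
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        // Lock (encrypt) the data with the key
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] cipherText = cipher.doFinal("my secret data".getBytes("UTF-8"));

        // Unlock (decrypt) the data with the *same* key - hence "symmetric"
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(cipher.doFinal(cipherText), "UTF-8"));
    }
}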

Hashing is similar to encryption in that it transforms data from a human-readable form into an unreadable form via a mathematical function. The primary difference between the two is that hashing is a one-way function. In other words, given the hash (or resultant) code, nobody should be able to work out the original data. As an example, the hash value for the phrase "test" using the MD5 hash algorithm is: 098f6bcd4621d373cade4e832627b4f6
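
As a rough sketch, this is how that MD5 hash can be produced with Java's standard MessageDigest class:

import java.security.MessageDigest;

public class HashExample {
    public static void main(String[] args) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest("test".getBytes("UTF-8"));

        // Convert the raw digest bytes into the familiar hex representation
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex); // prints 098f6bcd4621d373cade4e832627b4f6
    }
}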

If you can't ever retrieve the original data, what use is this? One of the common uses is securing passwords. The way in which it works can be explained by means of an account registration and login example. Upon account creation, the password is hashed, giving a block of unreadable data. This is then stored in the database as the "password". When the user enters their password during the login process, the entered password is once again hashed and the two hashed values are compared. If they are the same, the user entered a valid password. Note how the password in human-readable form is never needed to determine if the user has entered the correct password. So, even if a hacker got access to the system's source code and the hashed passwords, because a hashed password can't be reversed, it should be theoretically impossible to crack someone's password. Not quite...
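
A minimal sketch of that register-and-verify flow (SHA-256 and the example password are arbitrary choices; note that no salt is used yet - that comes below):

import java.security.MessageDigest;

public class PasswordCheck {
    // Hex-encoded hash helper, as in the earlier example
    static String hash(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(input.getBytes("UTF-8"))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Registration: only the hash is stored, never the plain password
        String storedHash = hash("S3cret!");

        // Login: hash the entered password and compare the two hash values
        System.out.println(hash("S3cret!").equals(storedHash)); // true - valid login
        System.out.println(hash("guess").equals(storedHash));   // false - rejected
    }
}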

There are a number of techniques employed in cracking passwords. Firstly, dictionary attacks take a dictionary of words and try each one sequentially until a match is found. They also try combinations of words, or words with prefixed and/or appended numbers. As it is much simpler to remember a name or word, people invariably choose simple passwords, which makes the dictionary attack amazingly effective. This highlights the importance of having a minimum-strength password policy in place, forcing users to select a password containing a mix of uppercase and lowercase letters, numbers and punctuation.

The other widely used approach is the use of rainbow tables. Basically, a hacker has a stored table of data which in essence contains two things: passwords and the hashed value for each password. Additionally, these hashed values are indexed in the database, which makes it very quick to simply look up a given hash value and determine the corresponding password. This approach trades memory for time: the tables are very large, but they allow much quicker cracking of hash values.
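
To illustrate the principle with a toy example (a real rainbow table uses hash chains to trade computation for storage, but the end effect is the same as this simple precomputed lookup):

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class LookupTableDemo {
    static String md5Hex(String input) throws Exception {
        StringBuilder hex = new StringBuilder();
        for (byte b : MessageDigest.getInstance("MD5").digest(input.getBytes("UTF-8"))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Precompute hash -> password for a (tiny) list of candidate passwords
        String[] candidates = {"password", "123456", "test", "letmein"};
        Map<String, String> table = new HashMap<>();
        for (String word : candidates) {
            table.put(md5Hex(word), word);
        }

        // "Cracking" a stolen hash is now just a lookup
        System.out.println(table.get("098f6bcd4621d373cade4e832627b4f6")); // prints test
    }
}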

As an example of how easy this can be, the hash value 098f6bcd4621d373cade4e832627b4f6 from the example above can be "broken" in seconds using an online web-based tool at: http://www.md5rainbow.com/

The way to defeat rainbow tables is to add a salt to the hash. A salt is a random piece of data that is combined with the given password before hashing, which makes cracking the password by means of lookup tables impractical.

As an example, a 3-letter password containing only lowercase letters has 17,576 different possibilities (26 x 26 x 26). If another 3 letters were added before hashing, such that the final string = salt + password, there would be 308,915,776 permutations. The space required to pre-compute all the possibilities for all salts grows exponentially as a longer salt is used, which renders generating a pre-compiled table infeasible.

Once the concatenated string has been hashed, both the salt and the hash are stored in the data store. The salt is used again later to validate that an entered password is correct, by combining it with the entered password in the same way, hashing the resultant string and comparing it to the stored value. Storing the salt alongside the password may sound counter-intuitive, but its sole purpose is to eliminate the possibility of using a pre-compiled table to crack passwords.
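
A rough sketch of that salt-and-hash scheme (the 16-byte salt and SHA-256 are just example choices; a real system would typically also use a deliberately slow password-hashing function, but the storage logic is the same):

import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SaltedHashExample {
    static String sha256Hex(String input) throws Exception {
        StringBuilder hex = new StringBuilder();
        for (byte b : MessageDigest.getInstance("SHA-256").digest(input.getBytes("UTF-8"))) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Registration: generate a random salt for this particular user
        byte[] saltBytes = new byte[16];
        new SecureRandom().nextBytes(saltBytes);
        String salt = Base64.getEncoder().encodeToString(saltBytes);

        // Store BOTH the salt and the hash of salt + password
        String storedHash = sha256Hex(salt + "S3cret!");

        // Login: reuse the stored salt, hash the entered password and compare
        boolean valid = sha256Hex(salt + "S3cret!").equals(storedHash);
        System.out.println(valid); // true
    }
}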

One more point about salts: every salt should be unique. If all your records are hashed with the same salt, then a determined hacker would only need to generate a single rainbow table for that salt and could then look up any password. If every salt is unique, the hacker would have to regenerate the table for every hashed password, making the job much more difficult.

Finally, there are a number of different hashing algorithms and some are better suited to particular jobs than others.

Some of the more common ones include SHA-256, MD5 and Whirlpool. The SHA family of hash algorithms is probably one of the better general-purpose choices (MD5, while common, is no longer considered secure for new designs), but check that any hash algorithm you choose to use is secure.

There seems to be an amusing correlation between the history of the fight between Kevin Mitnick and Tsutomu Shimomura, as portrayed in the movie Takedown, and the goings-on with Anonymous and HBGary. In the same way that the "expert", Shimomura, was hacked by Mitnick, HBGary was hacked a while ago after annoying the group. By getting in through the company website via an SQL injection attack, breaking unsalted hashed passwords with rainbow lookup tables and using some social engineering, they managed to download the company's emails and splash them all over the Internet.

What is yet to play out is whether Anonymous members will be caught in the same way that Mitnick was by Shimomura.

Here is an interview with Mitnick which has a few comments about the related LulzSec group. Amusingly, Mitnick is asked whether he could track them down if he were paid $1 million. I think he sidestepped that question carefully, avoiding insulting the group and making himself a target, as HBGary did.

(starts around 4 minutes)

Here's an excellent series of articles on an "average" Windows user trying out Ubuntu Linux for the first time. I think his experience closely mirrors that of many others, including mine. This was just one person's experience, but I think his final conclusions may give interested people some perspective on the OS.

http://www.pcworld.com/businesscenter/article/229187/30_days_with_ubuntu_linux_day_1.html

To watch the ASCII-art Star Wars movie in a terminal, type the following and hit Enter
(works in Linux and Windows, and probably Mac as well):

telnet towel.blinkenlights.nl

Finally, Microsoft seems to be making headway with Windows 8, and it seems to be heavily influenced by the Windows Phone 7 interface. A video has been released which showcases some of the new features, and it looks pretty awesome. It's very different from the usual Start menu/windows/desktop environment which has been around in one form or another since Windows 3.11.




It quite obviously has to compete with the likes of Android and Apple's iOS on tablets. However, I suspect that due to its late start, it'll always be playing catch-up. Now that users have become familiar with new interfaces, it'll be hard for Microsoft to claim back market share. Finally, because Android and iOS have had such a head start, their app markets are fairly mature, and this is where the real value lies.

It'll be interesting to see how Windows 8 fares on the tablet against Google and Apple.

I've recently come across OWASP (the Open Web Application Security Project) and it's really opened my eyes. http://www.owasp.org/index.php/Main_Page

According to their website:
"Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license."

The thing that interests me is that they've compiled a list of the top 10 security risks facing web applications. Most developers have heard of some of the vulnerabilities that are listed, but few really understand them and fewer "code to the doc". The document details each vulnerability, its implications and ways of avoiding the pitfalls. I believe that all developers should be well versed in this document, or at least these concepts. This way, the chances of an application and/or its data being compromised are greatly reduced. Security is never a yes/no question, but I think this is an excellent starting point.

Part four of the series detailing the OWASP top 10 web application vulnerabilities. (See intro)


On the surface of it, this might seem to have something to do with class type objects, but actually, it doesn't... So what are we talking about?

Well, the sort of objects we're talking about here are files, directories, database records or primary keys.

I find that using a specific example is the easiest way to explain these concepts, so consider the following URL:

http://myserver/index.jsp?getfile=myreport.doc
or
http://myserver/index.aspx?getfile=myreport.doc
The idea here is that the parameter is used to specify the file to download. This could quite easily be exploited by a hacker modifying the parameter to return an unintended file.

What would happen if a hacker modified the URL to:

http://myserver/index.aspx?getfile=/../web.config
or
http://myserver/index.jsp?getfile=../../../tomcat/conf/tomcat-users.xml

In the second JSP/Java/Tomcat example, they could get hold of the config file used to store user names and passwords - instant admin access!

This isn't limited to files - records identified by their primary keys are also vulnerable.

Once again, a hacker could easily modify the unique key in the following URL to return data that they might not be supposed to access:

http://myserver/viewAccountBalance.jsp?accountNr=1234
So what do we do to avoid this situation?
Firstly, validate all input. If you're returning files, ensure that a hacker can't escape the intended directory. This isn't always as easy as it sounds though - I recall that back in 2001 IIS had a bug allowing directory traversals by using the overlong Unicode representations of / and \ - %c0%af and %c1%9c. Using a combination of these bypassed all security checks and allowed commands to be passed to the command shell - a hacker effectively had complete access to the server. So be careful!
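
One common way to validate a file parameter is to canonicalise the resulting path and check that it still falls under the intended base directory. A rough sketch (the base directory here is a made-up example):

import java.io.File;
import java.io.IOException;

public class SafeFileResolver {
    // Hypothetical directory that downloads are allowed to come from
    private static final File BASE_DIR = new File("/var/www/downloads");

    static File resolve(String requestedName) throws IOException {
        File requested = new File(BASE_DIR, requestedName);
        // Canonicalisation collapses any ../ sequences; remember to URL-decode
        // and normalise the parameter before this point
        String canonical = requested.getCanonicalPath();
        if (!canonical.startsWith(BASE_DIR.getCanonicalPath() + File.separator)) {
            throw new IOException("Attempted directory traversal: " + requestedName);
        }
        return requested;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(resolve("myreport.doc"));                        // allowed
        System.out.println(resolve("../../tomcat/conf/tomcat-users.xml"));  // throws
    }
}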

Probably most importantly, use a reference map to refer to objects indirectly, so that the real identifiers are never exposed publicly. If a hashtable of valid keys and file names is stored server-side and files are referenced from the browser only by their keys, a hacker can't enter a file name or record key that they're not supposed to have access to.

E.g. instead of the first example above, a better approach would be:

http://myserver/index.jsp?getfileByID=2

Obviously, you only store in the hashtable those files that the user is supposed to have access to; otherwise they could simply change the file ID and once again access secure data.
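
A minimal sketch of such a reference map (the IDs and file names are illustrative; in practice the map would be built per user or per session from what that user may see):

import java.util.HashMap;
import java.util.Map;

public class FileReferenceMap {
    // Server-side map of indirect IDs to real file names, containing only
    // the files this user is allowed to access
    private final Map<Integer, String> allowedFiles = new HashMap<>();

    FileReferenceMap() {
        allowedFiles.put(1, "myreport.doc");
        allowedFiles.put(2, "summary.pdf");
    }

    // The browser only ever sends the ID (e.g. index.jsp?getfileByID=2);
    // anything not in the map is simply rejected
    String resolve(int id) {
        String fileName = allowedFiles.get(id);
        if (fileName == null) {
            throw new IllegalArgumentException("No such file reference: " + id);
        }
        return fileName;
    }

    public static void main(String[] args) {
        FileReferenceMap map = new FileReferenceMap();
        System.out.println(map.resolve(2));  // summary.pdf
        System.out.println(map.resolve(99)); // throws - not in the user's map
    }
}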


So... Beware of your parameters and what they could be exposing.

Microsoft has just announced that it will purchase Skype for $8.5bn.

On the surface of it, it seems as though it's a great purchase, with possible integration with Xbox, Windows Phone 7, Live.com etc. But has it come at too high a cost?

In 2009, 70% of Skype was sold for $2bn. 2 years later, it's bought for $8.5bn? I doubt that Skype's value has increased to the degree which justifies the price and the stock market seems to agree - Microsoft's share price is down slightly for the day by 0.62%, in spite of the NASDAQ gaining 1.01%.


If Microsoft can integrate it successfully with the rest of its products, it may be able to create an even stronger suite, which might also strengthen its case against Apple's iOS and Linux. There may not be any direct profits, but it may be worth it when considering total sales and products.


It's yet to be seen whether this was the right decision...

I know, a bit random - but here's a weird/funny take on the new Ubuntu colour. (Not that it actually has anything to do with Ubuntu, though...)



"South African software industry players are pushing for changes in legislation to help reduce piracy"

http://mybroadband.co.za/news/software/19277-Fighting-Piracy-with-the-law.html

Is there much of a point? The industry moves faster than the law can keep up. Back in 2002 the Electronic Communications and Transactions Act came into being. Prior to the ECT Act, there were virtually no laws governing many areas of the industry. But even by the time it came into being, there were already shortcomings and loopholes.

So, are we ever going to beat piracy? Not easily.

One of the less vaunted characteristics of open source software is that it can't really be pirated, as nobody "owns" it in the traditional sense. OK, it is possible to include open source code in proprietary software and thereby violate the licence, but that's not really the same as pirating a game. Businesses built around services, as opposed to products, do not suffer losses due to pirated software.

Another emerging technology/approach/philosophy is cloud computing, or maybe just web-based apps in general, which are impossible to pirate without infiltrating the physical infrastructure.

An approach that seems to work for certain software is the augmented services offered by online subscription/registration. For example, certain console games can be played online - without this feature, the game is severely limited. This acts as some deterrent to using pirated software.

I seriously doubt that it'll ever be possible to get rid of pirated software without a combination of these factors. I'm certain that it's not going to disappear by just modifying the law.

I faced an interesting question recently in building SharePoint based InfoPath forms. The problem was that a drop down field was being populated from a SharePoint list which could potentially have 50+ values, making the control cumbersome to use. The aim was to filter the values in the drop down based on another drop down field where there was a relationship between the two.

So how do you build these cascading dropdowns?

As an example, we'll use a city/state relationship, both stored in lists, and use these fields on a new record.

Firstly, we'll create a custom List to store the State List items with only one field - State Name.



Next we'll create a list of cities with a lookup field referencing the state field.


Now that we've got the two lists representing the data sorted, create a third list, on which we will add these two fields as lookups.

 

Now, as can be seen, the data that's been entered isn't valid in terms of state/city combinations. So, in order to build a form which filters the cities based on the selected state, click on the "Customise Form" button:


This will then open the form in InfoPath. Arrange the form elements appropriately and delete any controls that aren't needed. Now, we want to filter the City control based on the selected State ID. The problem is that the State ID doesn't exist in the generated City data connection, so we have to add a new one. Click on Data-->From SharePoint List, fill in your SharePoint site URL and click "Next". Select the City list you created from the collection of lists. On the next screen you'll notice that "ID" is selected by default; select both the "Name" field and the State field as well.

Now comes the magic... Right click on the City drop down list control and click on "Drop-down List Box Properties". Ensure that the new City data source is selected and click on the button next to "Entities". The d:SharePointListItem_RW item node should already be selected, now click on the "Filter Data" button. Click on "Add" to add a new filter. The condition that must be "true" is that the "State" field in the "City" data source must equal the selected "State" in the "main data source". Have a look at the attached image for a better idea...

Finally, ensure that the "Value" field the drop-down is bound to is "ID" and not "d:Title", which is probably selected by default.

Now test your form using the preview function and publish back to SharePoint.
Below is a screen shot of the final SharePoint InfoPath form.

 

This concept can be extended further using text boxes instead of drop-downs for basic "word filter" functionality, by using the "contains" match instead of "equals", and it can be combined with multiple filters. From a usability perspective, you'll probably have to disable the City text box until a user selects a State.

I've also used a similar "Filtered Data" approach to select single values from a list once a user selects an element from a drop down list. Extending the above example, we could lookup related data from the City SharePoint list and display it in a read-only text box once the user has selected a City.



Gnome 3 has finally been launched after what seems like years! So long, in fact, that Canonical/Ubuntu has decided to drop it from their upcoming version. But even if you don't want to wait for the major distributions to include it in their next versions, you can install it right now.

This is probably the biggest change to the Gnome interface in years.

Have a look at the following to see how to install it:
http://digitizor.com/2011/04/07/install-gnome3-desktop-ubuntu/

I came upon an interesting article this morning about HP presenting the Linux-based WebOS - which it acquired when buying Palm Computing - as its future, and "dumping" Microsoft Windows.

http://mybroadband.co.za/news/business/19279-HPs-bold-move.html

Not that WebOS is new, or that Windows is going to be totally left out in the cold, but it certainly seems to have a slightly higher profile now. So, it's going to compete head on with Android and probably a host of other Linux-based OSs. Once again, the question around a fragmented Linux world comes to mind. There seems to be very little consensus - many people, myself included, think that it's hindering progress to some degree. Maybe there isn't a single silver bullet to solve all problems, but do we need all these "flavours"?

I wonder how much overlap there is between the different projects. I'm pretty sure that there's a growing shared pool of developers, tools, code, drivers, standards etc., which can only be a good thing. Another thing is that since HP is one of the biggest hardware manufacturers in the world, I'm guessing that hardware support can't be hurt either. So, in spite of yet another OS on the market, I'm sure that the overall impact will be positive.

This is part five in a list of articles in which I'm detailing the OWASP Top 10 vulnerabilities. (see intro)

What is Cross-Site Request Forgery? Cross-Site Request Forgery (also known as a one-click attack, session riding or XSRF) is an attack whereby unauthorised commands are transmitted from a user that the website trusts, exploiting the trust that a site has in the user's browser. This is also known as a confused deputy attack against a browser: the "deputy" is the user's web browser, which is confused into misusing the user's authority at an attacker's direction.

Basically, a malicious script executes on a page unrelated to the targeted site and triggers a transaction on that site. This can happen on sites with URLs which have side effects. For example, while logged into Facebook, a user browses to a forum with an embedded script. When this script executes, it sends a request to another site to perform some action. This could be anything from changing passwords and details, to transferring money, purchasing an item etc. The attack can succeed even if the user isn't actively browsing the targeted site at the time, as long as the browser still holds valid authentication cookies. An example of a possibly vulnerable bank website is one which would execute a request like the following without further authorisation: http://www.myBank.com?action=transfer&fromAccount=12345&toAccount=09876&amount=1000

The main points in order for an XSRF attack to occur are:

  • The site has a URL that has side effects (e.g. changes passwords or details, transfers money, purchases an item etc.).
  • There are no secret authentication values or IDs that the attacker can't guess.
  • The site doesn't check the referrer header.
  • The attacker must get a victim to browse to a site with the attack script in order to execute it.
Prevention:

  • A web application should check the http referrer header.
  • Require secondary authorisation steps which cannot be forged. I've seen certain banks require a unique ID, SMS'ed to the account owner, in order to transfer money. This works because an XSRF attack is blind: once the forged request has been made, the response is not sent back to the attacker, so the attacker can't complete a second authorisation step. Note that it is possible to simulate multiple requests by executing time-delayed requests.
  • For security sensitive requests, ensure that authentication details are provided within the same http request.
  • In URLs that have side effects, ensure that a unique user token is required (see the sketch after this list).
  • Limit the lifetime of session cookies.
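
As a rough sketch of the unique-token idea (the synchroniser token pattern) using the servlet API - class and attribute names are just illustrative:

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpSession;

public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when rendering a form: store a random token in the session and
    // embed the same value in a hidden form field
    public static String issueToken(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().encodeToString(bytes);
        session.setAttribute("csrfToken", token);
        return token;
    }

    // Called when the side-effecting request arrives: the attacker's page
    // cannot read the victim's session, so it cannot supply a matching token
    public static boolean isValid(HttpSession session, String submittedToken) {
        Object expected = session.getAttribute("csrfToken");
        return expected != null && expected.equals(submittedToken);
    }
}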

I've had to set up a SharePoint 2010 virtual machine for upcoming projects. The problem is that SharePoint 2010 requires 64-bit Windows Server 2008.

OK, no problem... I've got a 64-bit machine. Hmmm... But at the time of reformatting my machine I didn't have the 64-bit Ubuntu with me, so I installed the 32-bit version.

So... the question is: can you install a 64-bit guest on top of a 32-bit host? Yes! Well, at least using VirtualBox - I can't really comment on other virtualisation technologies. The other relevant detail is that my physical CPU is an Intel P8600 64-bit processor with VT-x.

In order to get it to work, I had to enable a setting in the BIOS which allowed VirtualBox to virtualise a 64-bit machine. This was achieved by rebooting my machine, entering the BIOS and checking the following two fields under the Virtualisation section:

Virtualisation: "This field specifies whether a Virtual Machine Monitor (VMM) can utilize the additional hardware capabilities provided by Intel(R) Virtualization Technology."

and

VT for Direct I/O: "This field specifies whether a Virtual Machine Monitor (VMM) can utilize the additional hardware capabilities provided by Intel(R) Virtualization Technology for Direct I/O."

In order to get VirtualBox to actually recognise this, I had to reboot the machine a few times - very weird... But then suddenly, it worked! (Just make sure that you enable the settings for the specific virtual machine in VirtualBox as well.)

In my review of Kubuntu 10.10, I closed by saying that I probably wasn't going to keep it for very long. There seemed to be a few bugs in the UI, with some of the windows tearing. But I got used to most of the small idiosyncrasies and liked some of the features, so I ended up running it for about 3 months. I've just reinstalled Ubuntu 10.10 on my main laptop and once again feel much more at home. Firstly, I used a stopwatch to see how long it would take to install, remembering that I was totally blown away by the speed of the Kubuntu installation. Well, this time round it took a whopping 6 minutes, 20 seconds! As I didn't have a network connection at that point, I did have to install the extra media codecs afterwards, but all in all it was amazingly fast!

Once started up, I was also much happier with the responsiveness - launching applications just seems much faster and makes for a better experience.

So, with my Kubuntu phase behind me and my curiosity satisfied, here's how they stack up:

Firstly, both flavours are largely built on the same codebase. The major difference is the user interface components. Kubuntu, as the naming suggests, is based on the KDE interface, whereas Ubuntu is based on the Gnome interface. So the difference between the two largely comes down to personal preference regarding these interfaces.

The Kubuntu UI looks better than Ubuntu's, in my opinion. There are tons of cool widgets to add to the desktop, the notifications are pretty nice and overall it has a more polished look. This all comes at a cost though. On my machine it didn't run very smoothly, and there were the previously mentioned tearing problems. I'm not sure exactly what the cause was, because I haven't experienced the same problem in Ubuntu, so I'm guessing that it's got something to do with my graphics card/Qt libraries. But remember that even though I'm comparing the base installations here, it's pretty easy to install additional components to spruce up those that you're not happy with, e.g. a new main menu.

KWin provides effects similar to those offered by Compiz, such as a 3D desktop cube and wobbly windows, but I find that the Compiz effects are a bit smoother. In general, I preferred the level of customisation of the effects that can be achieved with the CompizConfig Settings Manager.

The suspend-to-disk feature didn't work at all for me in Kubuntu, causing it to hang and forcing a reboot. Suspend-to-RAM did work correctly, however. The boot times seem to be much better in Ubuntu than in Kubuntu, but then again much has been written about the work done to get Ubuntu's boot time to under 10 seconds on an "average" machine. I have found intermittent problems with suspend in Ubuntu, though. This seems to be a problem with the latest releases, because I'd never seen it before.

When it comes to package managers, I prefer the Ubuntu Software Centre interface in Ubuntu to KPackageKit in Kubuntu, but they both really do the same thing.

As far as applications are concerned, there's not much of a difference. For every app in Kubuntu there's a corresponding one in Ubuntu, and there's nothing really stopping you from running KDE/Qt applications in Ubuntu/Gnome (apart from a less optimised system in terms of memory usage). Plus, there are many that are written for both the Qt/Kubuntu and GTK/Ubuntu interfaces.

Overall, there seems to be better integration of the various non-UI components in Ubuntu. And most of the new developments such as the Me menu, Gwibber social client etc. are only found in Ubuntu.

So... Which one would I recommend...? If opinions were unanimous, this question wouldn't really exist - it would be a no-brainer. It always comes down to personal preferences. But for the purpose of some baseline recommendation...

On the performance front, without having done any formal comparisons, I think that Kubuntu seems a little less fluid because of the "heavy" graphical effects. I've heard numerous comments on how Kubuntu looks closer to Windows Vista/7 than Ubuntu does, and many people use this as their basis for deciding. I disagree - in terms of the transparent components, yes, but aside from that, Ubuntu may be more intuitive from a Windows user's perspective. I also think that Ubuntu is much simpler than Kubuntu, but don't let this lead you to believe that it's not as "powerful". On the contrary - some tasks that should be "difficult", like setting up a mobile wireless connection using a dongle, couldn't be easier.

So... it's not a straightforward decision, but try out both and decide for yourself.

I've got some stones that I've found in my garden which are guaranteed to grant you immortality! And furthermore, I'll give you a 100% money back guarantee!

Yeh.... WHATEVER!


Power Balance Australia has been forced to admit that their product is a scam: "The Australian Competition and Consumer Commission (ACCC) has ordered Power Balance Australia to refund all customers who feel they were misled by the supposed benefits of Power Balance bands." (Sources: ACCC, Gizmodo)

I can't believe that people still fall for this "magic" these days, or even that it's allowed to get to this point. I'm just hoping to spread the word about this and maybe stop someone being duped into paying good money for a piece of funky rubber.

This is part one of the series detailing the OWASP top 10 web application vulnerabilities. (See intro)



http://xkcd.com/327


An SQL injection attack is a type of code injection attack where an attacker exploits a vulnerability in the database layer of an application. This can occur when user input is not correctly filtered for escape characters. Serious damage can result, such as the loss of data or entire databases, compromised systems etc.

A classic example of this is in login screens. A user could enter a valid username in the "username" field and a specially constructed string in the password field. If this input is not filtered correctly, the database layer could build up an SQL query which returns unintended results. E.g. an attacker enters "Administrator" in the username field and "anything' or 1=1 --" in the password field. If the user input isn't filtered correctly, the following SQL query could be built up:

select * from user_table where user_name = 'Administrator' and password = 'anything' or 1=1 --'

Now, because (1=1) is always true, the WHERE clause evaluates to true for every row and the query returns all users, including Administrator. Another important part is the comment characters "--". These cause the database to ignore the rest of the statement; without them, the quote at the end of the string would render the query invalid. This effectively bypasses the password check and could allow an attacker to log into the system with any valid username. Remember that SQL dialects differ slightly, so there may be many different variants of the above attack depending on the database used.

This is only a simple example, but the possibilities are endless - consider what would happen if the following statement was executed:

SELECT * FROM someTable WHERE someField = 'x'; DROP TABLE user_table; --';

This shows that it is possible to not only construct a query which returns incorrect data, but also to modify databases. It therefore doesn't take too much imagination to extend this to inserting a totally new user into the database or modifying an existing password.

Now, the question is: how does an attacker learn the names of the tables or fields that some attacks rely on? Firstly, tables are normally named fairly logically, so a bit of guesswork goes a long way, but incorrect error handling is a dead give-away. A string which deliberately creates an incorrectly formatted SQL statement will cause the database to throw an exception. If the entire stack trace and message is displayed on the page, instead of an error page with a generic message, the select statement may be displayed, indicating the table and field names used in the statement. This is the only opening a hacker needs...

So, Lesson 1 - as with XSS (cross-site scripting) attacks, sanitise all input data.
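
Beyond sanitising, parameterised queries (prepared statements) keep the input out of the SQL text altogether, so the login check above can't be subverted. A minimal JDBC sketch reusing the user_table example:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {
    // The driver treats the bound values purely as data, so a password of
    // "anything' or 1=1 --" can never change the shape of the statement
    static boolean isValidLogin(Connection conn, String userName, String passwordHash) throws SQLException {
        String sql = "select 1 from user_table where user_name = ? and password = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, userName);
            stmt.setString(2, passwordHash);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}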

And Lesson 2 - make sure that all exceptions are caught and handled properly. Set up a default error page which shows a generic message.
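
A rough sketch of what that looks like in a servlet-style handler (the logger and error page path are illustrative):

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LoginHandler {
    private static final Logger LOG = Logger.getLogger(LoginHandler.class.getName());

    void handle(HttpServletRequest request, HttpServletResponse response) throws IOException {
        try {
            // ... run the (parameterised) login query here ...
        } catch (Exception e) {
            // The full stack trace goes to the server log, never to the browser
            LOG.log(Level.SEVERE, "Login failed with an unexpected error", e);
            // The user only ever sees a generic error page
            response.sendRedirect("/error.jsp");
        }
    }
}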

