Thursday 28 April 2011

10 tips for smarter, more efficient Internet searching

Takeaway: These days, everyone is expected to be up to speed on Internet search techniques. But there are still a few tricks that some users — and even savvy searchers — may not be aware of.
Did you hate memorizing seemingly insignificant facts for tests at school? No photographic memory? Good news! Life is now an open-book exam — assuming you have a computer, browser, and Internet access. If you know how to use a good search engine, you don’t have to stuff your mind with facts that are useful only when playing Jeopardy! and Trivial Pursuit.
Chances are, you aren’t the first person to run across the problem you are experiencing. Chances are also good that an answer is awaiting your discovery on the Internet — you just have to remove the irrelevant pages and the unhelpful/incorrect results to find that needle in the haystack.
Google has been fanatical about speed. There is little doubt that it has built an incredibly fast and thorough search engine. Unfortunately, the human element of the Internet search equation is often overlooked. These 10 tips are designed to improve that human element and better your Internet search skills. (Note: All examples below refer to the Google search engine.)
This article is also available as a PDF download.

1: Use unique, specific terms

It is simply amazing how many Web pages are returned when performing a search. You might guess that the terms blue dolphin are relatively specialized. A Google search of those terms returned 2,440,000 results! To reduce the number of pages returned, use unique terms that are specific to the subject you are researching.

2: Use the minus operator (-) to narrow the search

How many times have you searched for a term and had the search engine return something totally unexpected? Terms with multiple meanings can return a lot of unwanted results. The rarely used but powerful minus operator, equivalent to a Boolean NOT, can remove many unwanted results. For example, when searching for the insect caterpillar, references to the company Caterpillar, Inc. will also be returned. Use Caterpillar -Inc to exclude references to the company or Caterpillar -Inc -Cat to further refine the search.

3: Use quotation marks for exact phrases

I often remember parts of phrases I have seen on a Web page or part of a quotation I want to track down. Using quotation marks around a phrase will return only those exact words in that order. It’s one of the best ways to limit the pages returned. Example: “Be nice to nerds”. Of course, you must have the phrase exactly right — and if your memory is as good as mine, that can be problematic.

4: Don’t use common words and punctuation

Common terms like a and the are called stop words and are usually ignored. Punctuation is also typically ignored. But there are exceptions. Common words and punctuation marks should be used when searching for a specific phrase inside quotes. There are cases when common words like the are significant. For instance, Raven and The Raven return entirely different results.
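To see how the operators in tips 2 through 4 combine, here is a minimal sketch in Python (purely illustrative; the query and URL format are just examples) that assembles an exact phrase and a minus operator into one query and URL-encodes it:

    # Combine an exact phrase (tip 3) with the minus operator (tip 2),
    # then URL-encode the result for use in a search URL.
    from urllib.parse import quote_plus

    query = '"The Raven" -movie'   # exact phrase, excluding film pages
    print(query)                   # what you would type into the search box
    print("https://www.google.com/search?q=" + quote_plus(query))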

5: Capitalization

Most search engines do not distinguish between uppercase and lowercase, even within quotation marks. The following are all equivalent:
  • technology
  • Technology
  • TECHNOLOGY
  • “technology”
  • “Technology”

6: Drop the suffixes

It’s usually best to enter the base word so that you don’t exclude relevant pages. For example, use bird rather than birds, and walk rather than walked. One exception: if you are looking for sites that focus on the act of walking, enter the full term walking.

7: Maximize AutoComplete

Ordering search terms from general to specific in the search box will display helpful results in a drop-down list and is the most efficient way to use AutoComplete. Selecting the appropriate item as it appears will save typing time. You have two choices for how the AutoComplete feature works:
Use Google AutoComplete. The standard Google start page will display a drop-down list of suggestions supplied by the Google search engine. This option can be a handy way to discover similar, related searches. For example, typing in Tucson fast will bring up not only the suggestion Tucson fast food but also Tucson fast food coupons.
Use browser AutoComplete. Google also provides a start page with its AutoComplete feature disabled; use it, and your browser will instead display a list of your previous searches in a drop-down box. I find this particularly useful when I’ve made dozens of searches in the past for a particular item. The browser’s AutoComplete feature must be turned on for this option to work; instructions for turning AutoComplete on or off are available for both Internet Explorer and Firefox.
Examples:
  • Visual Basic statement case
  • Visual Basic statement for
  • Visual Basic call

8: Customize your searches

There are several other less well known ways to limit the number of results returned and reduce your search time:
  • The plus operator (+): As mentioned above, stop words are typically ignored by the search engine. The plus operator tells the search engine to include those words in the result set. Example: tall +and short will return results that include the word and.
  • The tilde operator (~): Include a tilde in front of a word to return results that include synonyms. The tilde operator does not work well for all terms and sometimes not at all. A search for ~CSS includes the synonym style and returns fashion-related style pages, not exactly what someone searching for CSS wants. Examples: ~HTML to get results for HTML with synonyms; ~HTML -HTML to get synonyms only for HTML.
  • The wildcard operator (*): Google calls it the fill in the blank operator. For example, amusement * will return pages with amusement and any other term(s) the Google search engine deems relevant. You can’t use wildcards for parts of words. So for example, amusement p* is invalid.
  • The OR operator (OR) or (|): Use this operator to return results with either of two terms. For example, happy joy will return pages with both happy and joy, while happy | joy will return pages with either happy or joy.
  • Numeric ranges: You can refine searches that use numeric terms by returning a specific range, but you must supply the unit of measurement. Examples: Windows XP 2003..2005, PC $700..$800.
  • Site search: Many Web sites have their own site search feature, but you may find that Google site search will return more pages. When doing research, it’s best to go directly to the source, and site search is a great way to do that. Example: site:www.intel.com rapid storage technology.
  • Related sites: For example, related:www.youtube.com can be used to find sites similar to YouTube.
  • Change your preferences: Search preferences can be set globally by clicking on the gear icon in the upper-right corner and selecting Search Settings. I like to change the Number Of Results option to 100 to reduce total search time.
  • Forums-only search: Under the Google logo on the left side of the search result page, click More | Discussions or go to Google Groups. Forums are great places to look for solutions to technical problems.
  • Advanced searches: Click the Advanced Search button by the search box on the Google start or results page to refine your search by date, country, amount, language, or other criteria.
  • Wonder Wheel: The Google Wonder Wheel can visually assist you as you refine your search from general to specific. Here’s how to use this tool:
  1. Click on More Search Tools | Wonder Wheel in the lower-left section of the screen (Figure A) to load the Wonder Wheel page.
  2. Click on dbms tutorial (Figure B).

Figure A



Figure B

As you can see in Figure C, Google now displays two wheels showing the DBMS and dbms tutorial Wonder Wheels, with the results for dbms tutorial on the right side of the page. You can continue drilling down the tree to further narrow your search. Click the Close button at the top of the results to remove the Wonder Wheel(s).

Figure C


9: Use browser history

Many times, I will be researching an item and scanning through dozens of pages when I suddenly remember something I had originally dismissed as being irrelevant. How do you quickly go back to that Web site? You can try to remember the exact words used for the search and then scan the results for the right site, but there is an easier way. If you can remember the general date and time of the search, you can look through the browser history to find the Web page.

10: Set a time limit — then change tactics

Sometimes you just cannot find what you are looking for. Start an internal clock, and when a certain amount of time has elapsed without results, stop beating your head against the wall. It’s time to try something else:
  • Use a different search engine, like Yahoo!, Bing, Startpage, or Lycos.
  • Ask a peer.
  • Call support.
  • Ask a question in the appropriate forum.
  • Use search experts who can find the answer for you.

The bottom line

A tool is only as useful as the typing fingers wielding it. Remember that old acronym GIGO, garbage in, garbage out? Search engines will try to place the most relevant results at the top of the list, but if your search terms are too broad or ambiguous, the results will not be helpful. It is your responsibility to learn how to make your searches both fast and effective.
The Internet is the great equalizer for those who know how to use it efficiently. Anyone can now easily find facts using a search engine instead of dredging them from the gray matter dungeon — assuming they know a few basic tricks. Never underestimate the power of a skilled search expert.

5 Considerations When Evaluating ISRM Programs and Capabilities

The following are 5 key items to consider when evaluating information security and risk management (ISRM) programs and capabilities:
  1. Does a defined and business-endorsed strategy exist? It is important to assess whether an organization has developed and implemented a formal strategy for the ISRM program, that associated capabilities exist, and that the strategy has been documented and approved within the organization. A comprehensive strategy will include, at minimum, the following key elements:
    • Comprehension and acknowledgement of current business conditions
    • Governance models that will be utilized
    • Alignment with the organizational risk profile and appetite
    • Budget considerations and sourcing plans
    • Metrics and measures
    • Communication and awareness plans
  2. How effective are the methods and practices for threat, vulnerability and risk assessment? The methods and practices that are used as part of ISRM programs and capabilities to evaluate threats, vulnerabilities and risks should be consistent, repeatable and easily understood by their target audiences. These methods and practices should minimally include the following components:
    • Business process mapping
    • Asset inventory and classification
    • Threat and vulnerability analysis methodology
    • Risk assessment methodology
    • Intelligence gathering, processing and reporting capabilities
  3. What is the approach to compliance? Compliance has quickly become an integrated part of any ISRM program or capability within an organization. There are numerous external regulatory, legal and industry standards, as well as internal policies, with which organizations need to comply. Ideally, compliance should be considered a starting point and not an end point of ISRM capabilities. Unfortunately, many organizations have adopted an approach called “security by compliance,” which is not only a sign of immaturity, but also may make them vulnerable to a significant number of business-impacting threats and may expose them to a wide range of risks for which they may not properly account.
  4. How are metrics and measures utilized? Metrics and measures are often used by organizations to evaluate the capabilities of their business units and functions. ISRM programs and capabilities have become more ingrained within organizations as independent business functions and business units, instead of as elements within technology programs. The need for these programs and capabilities to demonstrate and monitor their business value to their constituencies, including the organizations that they serve, has become a critical consideration in organizations’ operating strategy. The metrics and measures associated with ISRM capabilities should demonstrate a focus on the value provided and the efficiency of their functional capabilities.

    Each key metric or measure (a collection of multiple metrics and measures that is considered critical to the success of the organization) should also include thresholds with associated actions or activities. Metrics and measures without thresholds do not provide insight into the values they produce. Thresholds can be as simple as a notification or as complex as a trigger for a series of actions and activities that will be executed once the threshold is met. The intended audiences that will be required to take an action, or will be impacted by one, once the threshold is reached should be able to easily understand the business need or justification for the action and the value provided to the organization.
  5. Does the program use an operational or consultative approach? Information security and risk management programs can include operational components as part of their core capabilities or can operate in an advisory and consulting capacity to the organization. If operational components are included, there should be a clear definition of expectations of the operational responsibilities and how they differentiate from other operational capabilities within the organization. There also should be documented processes and procedures for sharing information related to operational effectiveness, requirements, intelligence and incident-response activities.

    If the approach is purely an advisory and consultative approach, the services that are provided to the organization should be clearly documented, as should the level of effort and interactions with the business that will be required for the services to be successful. Providing guidance and advice without operational responsibilities often allows an ISRM organization to be viewed positively within the organization, since it is limited in its ability to prevent the organization from implementing operational capabilities with which it may not agree.

Tuesday 26 April 2011

Security Challenges In Cloud Computing

Cloud Computing is one of the biggest buzzwords in the computer world these days. It allows resource sharing that includes software, platform and infrastructure by means of virtualization. Virtualization is the core technology behind cloud resource sharing. This environment strives to be dynamic, reliable, and customizable with a guaranteed quality of service. Security is as much of an issue in the cloud as it is anywhere else. Different people hold different points of view on cloud computing: some believe it is unsafe to use the cloud, while cloud vendors go out of their way to ensure security. This paper investigates a few major security issues with cloud computing and the existing countermeasures to those security challenges in the world of cloud computing.


Introduction

Cloud computing is a pay-per-use model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction [1][4][5]. Typically there are three types of resources that can be provisioned and consumed using cloud: software-as-a-service, platform-as-a-service, and infrastructure-as-a-service [1][2][3][5].
Cloud computing services themselves fall into three major categories. The first type of cloud computing service is known as Software-as-a-Service (SaaS). This service gives subscribers the ability to access the provider’s software applications running on a cloud infrastructure. The service provider manages and controls the application. Customers do not have to own the software; instead, they pay to use it through a web API [1][2]. For example, Google Docs relies on JavaScript, which runs in the Web browser [3].
The second type of cloud service is called Platform-as-a-Service (PaaS). It is another application delivery model. PaaS lets consumers deploy their applications on the provider’s cloud infrastructure using programming languages and tools supported by the provider. The consumer does not have to manage the underlying cloud infrastructure but has control over the deployed application [1][2]. A recent example is the Google App Engine, a service that lets developers write programs and run them on Google’s infrastructure [3].
The third and final type of cloud computing is known as Infrastructure-as-a-Service (IaaS). This service basically delivers virtual machine images as a service, and the machine can contain whatever the developers want [3]. Instead of purchasing servers, software, data center resources, network equipment, and the expertise to operate them, customers can buy these resources as an outsourced service delivered through the network cloud [2]. The consumer can automatically grow or shrink the number of virtual machines running at any given time to accommodate changes in their requirements.
There are different kinds of cloud deployment models available; the three major types are the private cloud (also known as an internal cloud), the public cloud, and the hybrid cloud.

This paper is organized as follows. In section 2, we briefly describe the cloud computing architecture. In section 3, we briefly describe the applications of cloud computing. In section 4, we discuss the major security challenges in the cloud computing environment and their existing countermeasures. In section 5, we briefly discuss the cloud-related working groups. In section 6, we discuss the security standards in cloud computing. Section 7 concludes the paper.

2 Cloud Computing Architecture
A cloud computing system is divided into two sections: the front end and the back end. These two ends connect to each other, usually through the Internet. The front end is the user side, and the back end is the “cloud” section of the system. The front end includes the client’s computer and the application required to access the cloud computing system. As shown in figure 1, on the back end of the system are the various computers, servers and data storage systems that create the “cloud” of computing services [2][5][6]. A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware [2][5].



Cloud middleware, also referred to as a cloud OS, is the major system that manages and controls services. Middleware allows networked computers to communicate with each other [6]. Google App Engine and Amazon EC2/S3 are examples of cloud middleware [20]. Application Programming Interfaces (APIs) for applications, acquisition of resources such as computing power and storage, and machine image management must be available to make applications suitable for network clouds [2][5][13].
In a simplified vision of the cloud computing architecture, as shown in figure 2, the client first sends a service request. System management then finds the correct resources, and system provisioning allocates them. Once the computing resources are allocated, the client request is executed. Finally, the results of the service request are sent back to the client [2][6][13].
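The flow can be sketched in a few lines of Python; every name below is an illustrative stand-in for the components just described, not any real cloud API:

    # Toy sketch of the simplified request flow in figure 2.
    def find_resources(request):             # system management
        return ["vm-1", "vm-2"]

    def provision(resources):                # system provisioning
        return [r + ":allocated" for r in resources]

    def execute(request, resources):         # back-end execution
        return "result of %r on %s" % (request, resources)

    def handle_request(request):
        resources = find_resources(request)
        allocated = provision(resources)
        return execute(request, allocated)   # results return to the client

    print(handle_request("render report"))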

3 Cloud Computing Applications

The applications of cloud computing are practically limitless. With the right middleware, a cloud computing system can practically run all the applications a personal computer can run.
- Clients will be able to access their applications and data at any time, from anywhere, using any computer linked to the Internet [6].
- Traditionally, organizations that rely on computers for their operations have had to buy all the required software or software licenses for every employee. A cloud computing system gives these organizations access to all the required computer applications without buying those applications; instead, the company pays a per-use fee to a cloud service provider [4][6].
- Cloud computing will reduce hardware costs on the client side. Users do not have to buy the computer with the most memory or the largest hard drive to store their data; the cloud system takes care of those needs. Clients just have to buy a terminal with a monitor and input devices, with just enough processing power to run the middleware necessary to connect to the cloud system [4][6][18].
- In most companies, servers and digital storage devices take up a huge amount of space. Some companies do not have a large physical space available on-site, so they rent space to store their servers and databases. Cloud computing gives these companies the option to store their data on someone else’s (the cloud service provider’s) hardware, freeing them of the requirement to have their own physical space on the client side [6][17].
- Clients can make use of the cloud system’s huge processing power. As in grid computing, a client can send huge, complex calculations to the cloud for processing. Some complex calculations would take years for an individual computer to complete. In such cases, the cloud system uses the processing power of as many available back-end computers as needed to speed up the calculation [1][6][8].
Cloud computing offers significant advantages over the traditional computing model, but it has its own issues.
In the next section we discuss the major security challenges in the cloud computing environment and their existing countermeasures.

4 Cloud Computing Challenges
Security and privacy are the two major concerns about cloud computing. In the cloud computing world, the virtual environment lets users access computing power that exceeds that contained within their physical environment. To enter this virtual environment, a user is required to transfer data throughout the cloud. Consequently, several security concerns arise [4][7][8][16].

4.1 Information Security
It is concerned with protecting the confidentiality, integrity and availability of data regardless of the form the data may take [9].


- Losing control over data: Outsourcing means losing significant control over data. Large banks do not want to run a program delivered in the cloud that risks compromising their data through interaction with some other program [3][10]. Amazon Simple Storage Service (S3) APIs provide both bucket- and object-level access controls, with defaults that permit authenticated access only by the bucket and/or object creator. Unless a customer grants anonymous access to their data, the first step before a user can access data is authentication, using an HMAC-SHA1 signature of the request created with the user’s private key [9][15][16]. The customer therefore maintains full control over who has access to their data [13].
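The signing scheme works roughly as sketched below. This is a simplified illustration of AWS-style HMAC-SHA1 request signing (in the spirit of Signature Version 2); the key ID, secret, bucket, and date are placeholder values:

    # Canonicalize the request details into a string, sign it with the
    # user's secret key using HMAC-SHA1, and send the Base64-encoded
    # signature in the Authorization header.
    import base64, hashlib, hmac

    def sign(secret_key, verb, content_md5, content_type, date, resource):
        string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
        digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    signature = sign("EXAMPLE-SECRET-KEY", "GET", "", "",
                     "Thu, 28 Apr 2011 10:00:00 GMT", "/my-bucket/report.csv")
    print("Authorization: AWS EXAMPLE-KEY-ID:" + signature)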
- Data Integrity: Data integrity is the assurance that data changes only in response to authorized transactions. For example, if the client is responsible for constructing and validating database queries and the server executes them blindly, an intruder will always be able to modify the client-side code to do whatever it has permission to do with the back-end database. Usually, that means the intruder can read, change, or delete data at will [3]. A common standard to ensure data integrity does not yet exist [8]. In this new world of computing, users are universally required to accept the underlying premise of trust. In fact, some have conjectured that trust is the biggest concern facing cloud computing [7].
- Risk of Seizure: In a public cloud, you are sharing computing resources with other companies. Exposing your data in an environment shared with other companies could give the government “reasonable cause” to seize your assets because another company has violated the law. Simply sharing the environment in the cloud may put your data at risk of seizure [4][8]. The only protection users have against the risk of seizure is to encrypt their data. A subpoena will compel the cloud provider to turn over the user’s data and any access it might have to that data, but the cloud provider will not have the user’s decryption keys. To get at the data, the court will have to subpoena the user directly. As a result, users end up with the same level of control they have in their own private data centers [4][16].
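Client-side encryption before upload can be as simple as the following sketch, which uses the Fernet recipe from the third-party cryptography package (my choice of library, not one the cited sources prescribe); the provider then stores only ciphertext while the key never leaves the customer:

    # Encrypt locally before upload: the provider (and any subpoena served
    # on it) sees only ciphertext; the key stays with the customer.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this in your own data center
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"quarterly ledger data")   # store this in the cloud
    print(cipher.decrypt(ciphertext))                       # only the key holder can do this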
- Incompatibility Issue: Storage services provided by one cloud vendor may be incompatible with another vendor’s services should you decide to move from one to the other. Vendors are known for creating what the hosting world calls “sticky services” – services that an end user may have difficulty transporting from one cloud vendor to another. For example, Amazon’s “Simple Storage Service” [S3] is incompatible with IBM’s Blue Cloud, or Google, or Dell [4][8][13]. Amazon and Microsoft both declined to sign the newly published Open Cloud Manifesto. Amazon and Microsoft pursue interoperability on their own terms [11][12][14].
- Constant Feature Additions: Cloud applications undergo constant feature additions, and users must keep up to date with application improvements to be sure they are protected. The speed at which applications will change in the cloud will affect both the SDLC (Software development life cycle) and security [4][8]. Updates to AWS infrastructure are done in such a manner that in the vast majority of cases they do not impact the customer and their Service use [9][13]. AWS communicates with customers, either via email, or through the AWS Service Health Dashboard when there is a chance that their Service use may be affected [9].
- Failure in Provider’s Security: Failure of the cloud provider to properly secure portions of its infrastructure, especially in the maintenance of physical access control, results in the compromise of subscriber systems. A cloud can comprise multiple entities, and in such a configuration, no cloud can be more secure than its weakest link [3][7]. Customers are expected to trust the provider’s security. For small and medium-sized businesses, provider security may exceed customer security, but it is generally difficult for customers to obtain the details that help ensure that the right things are being done [3][7].
- Cloud Provider Goes Down: This scenario has a number of variants: bankruptcy, deciding to take the business in another direction, or a widespread and extended outage. Whatever the cause, subscribers risk losing access to their production systems due to the actions of another company. Subscribers also risk that the organization controlling their data might not protect it in accordance with the service levels to which it previously committed [4]. The only option users have is to choose a second provider and use automated, regular backups (for which many open source and commercial solutions exist) to make sure any current and historical data can be recovered even if their cloud provider were to disappear from the face of the earth [4].

4.2 Network Security

Network security measures are needed to protect data during transmission, both between the terminal user and the computer and between computer and computer [21][22].
- Distributed Denial of Service (DDOS) Attack: In a DDOS attack, servers and networks are brought down by a huge amount of network traffic, and users are denied access to a certain Internet-based service. In a commonly recognized worst-case scenario, attackers use botnets to perform DDOS attacks, and subscribers or providers may face blackmail to make the attacks stop [21][14]. Amazon Web Service (AWS) Application Programming Interface (API) endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world’s largest online retailer. Proprietary DDOS mitigation techniques are used. Additionally, Amazon’s networks are multi-homed across a number of providers to achieve Internet access diversity [9].
- Man in the Middle Attack: This attack is a form of active eavesdropping in which the attacker makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection when in fact the entire conversation is controlled by the attacker [21]. All of the AWS APIs are available via SSL-protected endpoints which provide server authentication. Amazon EC2 AMIs automatically generate new SSH host certificates on first boot and log them to the instance’s console. Customers can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time. Customers are encouraged to use SSL for all of their interactions with AWS [9].
- IP Spoofing: Spoofing is the creation of TCP/IP packets using somebody else’s IP address. An intruder gains unauthorized access to a computer by sending messages with an IP address indicating that the message is coming from a trusted host [21][22]. Amazon EC2 instances cannot send spoofed network traffic. The Amazon-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own [9].
- Port Scanning: If the subscriber configures the security group to allow traffic from any source to a specific port, then that specific port will be vulnerable to a port scan. Since a port is a place where information goes into and out of the computer, port scanning identifies open doors to a computer [21]. There is no way to stop someone from port scanning your computer while you are on the Internet, because accessing an Internet server opens a port, which opens a door to your computer [8]. Port scans by Amazon Elastic Compute Cloud (EC2) customers are a violation of the Amazon EC2 Acceptable Use Policy (AUP). Violations of the AUP are taken seriously, and every reported violation is investigated. Customers can report suspected abuse. When port scanning is detected, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by the customer [9].
- Packet Sniffing by Other Tenants: Packet sniffing is listening (with software) to the raw network device for packets that interest you. When that software sees a packet that fits certain criteria, it logs it to a file. The most common criterion for an interesting packet is one that contains words like “login” or “password” [21][22]. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them [9]. Even two virtual instances that are owned by the same customer, located on the same physical host, cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice customers should encrypt sensitive traffic [9].
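For a sense of how little machinery the attack requires, here is a deliberately naive, Linux-only sketch (root privileges required; the keyword is arbitrary) of the sniffing technique the hypervisor blocks between tenants:

    # Open a raw socket, read every frame on the wire, and log the ones
    # containing an "interesting" keyword. Linux-only; requires root.
    import socket

    ETH_P_ALL = 3   # capture all protocols
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

    while True:
        frame, _ = sock.recvfrom(65535)
        if b"password" in frame.lower():
            print("interesting packet:", frame[:64])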

4.3 Virtualization Security Issues

Security issues are more complex in a virtualized environment because you now have to keep track of security on two tiers: physical host security and virtual machine security. If the physical host server’s security becomes compromised, all of the virtual machines residing on that particular host server are impacted. And a compromised virtual machine might also wreak havoc on the physical host server, which may then have an ill effect on all of the other virtual machines running on that same host [23].
Instance Isolation: Isolation ensures that different instances running on the same physical machine are kept apart from each other. Virtualization efficiencies in the cloud require virtual machines from multiple organizations to be co-located on the same physical resources. Although traditional data center security still applies in the cloud environment, physical segregation and hardware-based security cannot protect against attacks between virtual machines on the same server [18]. Administrative access is through the Internet rather than the controlled and restricted direct or on-premises connection adhered to in the traditional data center model. This increased risk of exposure requires stringent monitoring for changes in system control and access control restrictions [8]. Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which ensures awareness of the latest developments. In addition, the AWS firewalls reside within the hypervisor layer, between the physical network interface and the instance’s virtual interface. All packets must pass through this layer, so an instance’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms [9].
Host Operating System: Administrators with a business need to access the management plane are required to use multi-factor authentication to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to those hosts and relevant systems are revoked [18].
Guest Operating System: Virtual instances are completely controlled by the customer. Customers have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to customer instances and cannot log into the guest OS. AWS recommends a base set of security best practices: customers should disable password-based access to their hosts and utilize some form of multi-factor authentication to gain access to their instances, or at a minimum certificate-based SSH Version 2 access [9][13][15]. Additionally, customers should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest OS is Linux, after hardening their instance they should utilize certificate-based SSHv2 to access the virtual instance, disable remote root login, use command-line logging, and use ‘sudo’ for privilege escalation. Customers should generate their own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS [9]. AWS Multi-Factor Authentication (AWS MFA) is an additional layer of security that offers enhanced control over AWS account settings. It requires a valid six-digit, single-use code from an authentication device in your physical possession, in addition to your standard AWS account credentials, before access is granted to AWS account settings. This is called multi-factor authentication because two factors are checked before access is granted: customers must provide both their Amazon email ID and password (the first “factor”: something you know) and the precise code from their authentication device (the second “factor”: something you have).

4.4 General Security Issues
In addition to the above-mentioned issues, there are a few other general security issues that are delaying cloud computing adoption and need to be taken care of.
Data Location: When users use the cloud, they probably will not know exactly where their data is hosted or what country it will be stored in [3][4][8]. Amazon does not even disclose where its data centers are located; it simply claims that each data center is hosted in a nondescript building with a military-grade perimeter. Even if customers know that their database server is in the us-east-1a availability zone, they do not know where the data center(s) behind that availability zone are located, or even which of the three East Coast availability zones us-east-1a represents [4].
Data Sanitization: Sanitization is the process of removing sensitive information from a storage device. In cloud computing, users are always concerned about what happens to data stored in a cloud computing environment once it has passed its user’s “use by” date [18]. When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that ensures customer data are not exposed to unauthorized individuals. AWS uses the DoD 5220.22-M technique, as specified in the National Industrial Security Program Operating Manual, to destroy data as part of the decommissioning process [9][13]. When item and attribute data are deleted within a domain, removal of the mapping within the domain starts immediately and is generally complete within seconds. Once the mapping is removed, there is no remote access to the deleted data. The storage area is then made available only for write operations, and the data are overwritten by newly stored data [9].
Job Starvation due to a virus or worm: This is where one job takes up a huge amount of resources, resulting in resource starvation for the other jobs. Customers can reserve resources in advance, or reduce the priority of the affected tasks/jobs [16][18].
In the next section of our paper we discuss the various cloud-related working groups and their contributions to the cloud computing environment.

5 Cloud Related Working Groups
A working group is an assembled, cooperative collaboration of researchers working on new research activities that would be difficult for any one member to develop alone. Working groups generally strive to create an informational document or a standard, or to find some resolution for problems related to a system or network. Most often, the working group attempts to assemble experts on a topic. Working groups are sometimes also referred to as task groups or technical advisory groups.
The Open Cloud Consortium (OCC) is organized into several different working groups [8], such as the working group on Standards and Interoperability for Clouds. The purpose of the OCC is to support the development of standards for cloud computing and to develop a framework for interoperability among various clouds [19]. There is also a working group on wide area clouds and the impact of network protocols on clouds. The focus of this working group is on developing technology for wide area clouds, including the creation of methodologies and benchmarks to be used for evaluating wide area clouds. This working group is tasked with studying the applicability of variants of TCP and the use of other network protocols for clouds.
The working group on information sharing, security and clouds has a primary focus on standards and standard-based architectures for sharing information between clouds. This is especially true for clouds belonging to different organizations and subject to possibly different authorities and policies. This group is also concerned with security architectures for clouds. Finally, there is an Open Cloud Test-bed working group that manages and operates the open cloud test-bed [19].
Another very active group in the field of cloud computing is the Distributed Management Task Force (DMTF) [8]. According to its web site, the DMTF enables more effective management of millions of IT systems worldwide by bringing the IT industry together to collaborate on the development, validation and promotion of systems management standards [24][25].
This group spans the industry with 160 member companies and organizations, and more than 4,000 active participants crossing 43 countries. The DMTF board of directors is led by 16 innovative, industry-leading technology companies.
The DMTF started the Virtualization Management Initiative (VMAN). VMAN unleashes the power of virtualization by delivering broadly supported interoperability and portability standards to virtual computing environments, enabling IT managers to deploy preinstalled, preconfigured solutions across heterogeneous computing networks and to manage those applications through their entire life cycle [20][25].
In the next section we discuss the major security standards for cloud computing and their application in the cloud computing environment.

6 Standards for Security in Cloud Computing

Security standards define the processes, procedures, and practices necessary for implementing a security program. These standards also apply to cloud-related IT activities and include specific steps that should be taken to ensure a secure environment is maintained that provides privacy and security of confidential information in a cloud environment. Security standards are based on a set of key principles intended to protect this type of trusted environment. A basic philosophy of security is to have layers of defense, a concept known as defense in depth. This means having overlapping systems designed to provide security even if one system fails. An example is a firewall working in conjunction with an intrusion-detection system (IDS). Defense in depth provides security because there is no single point of failure and no single entry vector at which an attack can occur. For this reason, a choice between implementing network security in the middle part of a network (i.e., in the cloud) or at the endpoints is a false dichotomy [8]. No single security system is a solution by itself, so it is far better to secure all systems. This type of layered security is precisely what we are seeing develop in cloud computing. Traditionally, security was implemented at the endpoints, where the user controlled access. An organization had no choice except to put firewalls, IDSs, and antivirus software inside its own network. Today, with the advent of managed security services offered by cloud providers, additional security can be provided inside the cloud [8][9].
Security Assertion Markup Language (SAML): SAML is an XML-based standard for communicating authentication, authorization, and attribute information among online partners. It allows businesses to securely send assertions between partner organizations regarding the identity and entitlements of a principal. SAML standardizes queries for, and responses that contain, user authentication, entitlements, and attribute information in an XML format. This format can then be used to request security information about a principal from a SAML authority. A SAML authority, sometimes called the asserting party, is a platform or application that can relay security information. The relying party (also called the assertion consumer or requesting party) is a partner site that receives the security information. The exchanged information deals with a subject’s authentication status, access authorization, and attribute information. A subject is an entity in a particular domain; a person identified by an email address is a subject, as might be a printer [8]. SAML is built on a number of existing standards, namely SOAP, HTTP, and XML. SAML relies on HTTP as its communications protocol and specifies the use of SOAP.
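A skeletal assertion helps make the terminology concrete. The following is an illustrative, unsigned SAML 2.0 assertion fragment (the issuer and subject are placeholders), parsed with Python’s standard library the way a relying party would read out the subject:

    # Minimal, unsigned SAML 2.0 assertion skeleton; a real assertion also
    # carries signatures, conditions, and attribute statements.
    import xml.etree.ElementTree as ET

    ASSERTION = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_example" IssueInstant="2011-04-28T10:00:00Z" Version="2.0">
      <saml:Issuer>https://idp.example.com</saml:Issuer>
      <saml:Subject>
        <saml:NameID>alice@example.com</saml:NameID>
      </saml:Subject>
      <saml:AuthnStatement AuthnInstant="2011-04-28T10:00:00Z"/>
    </saml:Assertion>"""

    ns = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
    root = ET.fromstring(ASSERTION)
    print(root.find("saml:Subject/saml:NameID", ns).text)   # the authenticated subject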
Open Authentication (OAuth): OAuth is an open protocol, initiated by Blaine Cook and Chris Messina, to allow secure API authorization in a simple, standardized method for various types of web applications. OAuth is a method for publishing and interacting with protected data. For developers, OAuth provides users access to their data while protecting account credentials. It also allows users to grant access to their information, which is shared by the service provider and consumers without sharing all of their identity. OAuth is the baseline, and other extensions and protocols can be built on it. By design, OAuth Core 1.0 does not provide many desired features, like automated discovery of endpoints, language support, support for XML-RPC and SOAP, standard definition of resource access, OpenID integration, signing algorithms, etc [8]. The core deals with fundamental aspects of the protocol, namely, to establish a mechanism for exchanging a user name and password for a token with defined rights and to provide tools to protect the token. It is important to understand that security and privacy are not guaranteed by the protocol. In fact, OAuth by itself provides no privacy at all and depends on other protocols such as SSL to accomplish that.
OpenID: It is an open, decentralized standard for user authentication and access control. It allows users to log onto many services using the same digital identity. It is a single-sign-on (SSO) method of access control. OpenID replaces the common log-in process, i.e. a log-in name and a password, by allowing users to log in once and gain access to resources across participating systems. An OpenID is in the form of a unique URL and is authenticated by the entity hosting the OpenID URL [9]. The OpenID protocol does not rely on a central authority to authenticate a user’s identity. Neither the OpenID protocol nor any websites requiring identification can mandate that a specific type of authentication be used; nonstandard forms of authentication such as smart cards, biometrics, or ordinary passwords are allowed [8].
SSL/TLS: Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographically secure protocols designed to provide security and data integrity for communications over TCP/IP. TLS and SSL encrypt the segments of network connections at the transport layer. The TLS protocol allows client/server applications to communicate across a network in a way specifically designed to prevent eavesdropping, tampering, and message forgery [21]. TLS provides endpoint authentication and data confidentiality by using cryptography. TLS authentication is usually one-way: only the server is authenticated, so the client knows the server’s identity while the client itself remains unauthenticated [12]. TLS also supports a more secure bilateral connection mode whereby both ends of the connection can be assured that they are communicating with whom they believe they are connected. This is known as mutual (assured) authentication. TLS involves three basic steps. The first step deals with peer negotiation for algorithm support: the client and server negotiate cipher suites, which determine which ciphers are used. In the next step, key exchange and authentication are decided: a decision is made about the key exchange and authentication algorithms to be used (typically public key algorithms), and the message authentication codes are determined. The final step covers symmetric cipher encryption and message authentication; the message authentication codes are built from cryptographic hash functions. Once these decisions are made, data transfer may begin [9][12].
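Python’s standard library performs these handshake steps automatically; the short client sketch below (the host name is a placeholder) verifies the server’s certificate and then reports the negotiated protocol version and cipher suite:

    # Minimal TLS client: certificate verification defends against
    # man-in-the-middle attacks; version() and cipher() report what the
    # handshake negotiated.
    import socket, ssl

    context = ssl.create_default_context()   # verifies the server certificate
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version())   # e.g. 'TLSv1.2'
            print(tls.cipher())    # (cipher suite, protocol version, secret bits)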
7 Conclusions
The cloud computing phenomenon is generating a lot of interest worldwide because of its lower total cost of ownership, scalability, competitive differentiation, reduced complexity for customers, and faster and easier acquisition of services. While the cloud offers several advantages, people come to cloud computing from different points of view. Some believe the cloud to be an unsafe place, but a few find it safer than their own security provisioning, especially small businesses that do not have the resources to ensure the necessary security themselves. Several large financial organizations and some government agencies are still holding back. They indicate that they will not consider moving to the cloud anytime soon because they have no good way to quantify their risks. To gain total acceptance from all potential users, from individuals and small businesses to Fortune 500 firms and government agencies, cloud computing requires some standardization of the security environment and third-party certification to ensure that standards are met.



How to get people to use strong passwords

Takeaway: Can passwords be both secure and easy to use? Some think “no”, but if you stop there, you simply aren’t thinking enough.

As noted in “Don’t be fooled by the argument against unique passwords,” people giving bad security advice, claiming that good security practices are impractical and should be ignored, are a surprisingly common sight. As the world’s security issues become more prevalent and problematic, the rate at which such incredibly bad advice is offered seems to increase, and it can do increasing amounts of damage when people buy into its message.
An example from 2007 that I have only just encountered is “The Usability of Passwords,” which begins:

Security companies and IT people constantly tells us that we should use complex and difficult passwords. This is bad advice, because you can actually make usable, easy to remember and highly secure passwords. In fact, usable passwords are often far better than complex ones.
The underlying assumptions of this statement are several, and almost invariably wrong. First, it suggests that password strength through complexity necessarily makes passwords unusable. Second, it suggests that simple passwords can be “strong enough” for almost all purposes. Third, it subtly suggests, like the majority of such overly facile arguments, that convenience trumps security all the time. Finally, it suggests that its author, Thomas Baekdal, knows more about security than people who actually study its intricacies. That may or may not be true in some cases, but the rest of The Usability of Passwords demolishes the hypothesis by putting his incomplete understanding of the factors involved in password security on display.

Before proceeding, let us demolish his entire argument with a single fact.

Strong passwords are easy to manage

A good password manager solves the problem of password usability quite neatly in at least 98% of cases. In fact, employing a good password manager can actually make password use easier than heeding the simplistic, dangerous advice of most of these advocates for bad password policy. Using one really strong password for all the “important” sites and one really weak password for all the “unimportant” sites requires remembering two passwords, where a password manager only requires remembering one. Using differing weak passwords everywhere requires remembering many passwords, which does not really make things more convenient. And using passphrases made up of easy-to-remember terms in one’s native language requires a lot more typing than a single master password for a password management application. Meanwhile, choosing a password manager carefully and configuring the software well can result in a relatively pleasant experience protecting one’s security. The software I typically use is pwsafe with a keyboard-shortcut-driven interface for the X Window System: secure, simple, well designed, and, thanks to the addition of my keyboard-driven interface wrapper, very easy to use.
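For illustration, generating the kind of password a manager stores takes only a few lines; the length and character set below are my own choices, not a prescription:

    # Generate a long, random, unique password per site, in the spirit of
    # what a password manager automates for you.
    import secrets, string

    def generate_password(length=20):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())   # store it in the manager, never reuse it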
For those using Microsoft’s flagship OS, an earlier article covered how to use Password Safe on Microsoft Windows 7. In the discussions following both that article and the article describing how to set up a keyboard-driven interface for pwsafe, readers offered suggestions for other password manager applications.

The incomplete arguments

Much of the problem with non-secure security arguments like Thomas Baekdal’s is an incomplete understanding of how things actually work. Examples drawn from “The Usability of Passwords” are numerous, easily deconstructed, and easily refuted by a more thoughtful individual than its author. Each of them starts with a seed of truth, and proceeds to wander outside the realm of applicability and accuracy when he starts filling in the holes in his understanding with poorly considered conjecture.

How to crack a password

In The Usability of Passwords, its author explains how passwords are cracked (showing his intimate knowledge of the subject by misusing the term “hack” where “crack” would be correct). He lists five possibilities:
  1. Asking
  2. Guessing
  3. Brute Force Attacks
  4. Common Word Attacks
  5. Dictionary Attacks
He (incorrectly, I believe) claims that the most common methods of cracking a password are asking and guessing. While it is true that people asking others for their passwords is probably the most common way that someone might gain access to another’s password, the vast majority of these cases involve honest people who intend no harm and are simply trying to get their work done — or, in some cases, even trying to help the person whose password they acquire in this manner. As for guessing, it boggles the mind: in this age of profitable, computer-powered, automated security compromises, the notion that a security cracker sitting down in front of someone’s computer and typing in strings of characters lands within the top ten approaches to cracking password security is absurd. Do not let CSI: New York fool you into thinking that is a common approach, relative to the other approaches Thomas Baekdal listed.
The closest we might get to truly common usage of the “guessing” approach he describes — using information about an individual to inform one’s attempts to guess — is using information about an individual to prioritize the potential password combinations used in a brute force attack.
The most sophisticated attacks that actually rely on coming up with the password itself are, in some respects, all brute force attacks. What he terms a “common word attack” is in fact a dictionary attack using a “dictionary” smaller than the Oxford English Dictionary. A dictionary attack is just a way to prioritize a brute force attack so that the passwords most likely to be used by security-unconscious users are tried first (or perhaps only). Contrary to his description, in fact, most automated dictionary attacks start with “love”, “god”, “password”, and several other common terms first, rather than just going through the OED in alphabetical order. If the way he described things were accurate, picking “zen” as your password would improve security substantially, while “aardwolf” would mean almost instant compromise (beaten only by “aardvark”, proper names, and initialisms like AAPSS). In the real world, “password” comes before either of those options in the vast majority of cases.
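A prioritized dictionary attack takes only a few lines of code. In this sketch the “stolen” hash and the tiny word list are stand-ins for a leaked credential database and a real cracking dictionary:

    # Try the most common passwords first, then the rest of the dictionary,
    # against a single unsalted MD5 hash.
    import hashlib

    target = hashlib.md5(b"password").hexdigest()   # pretend this was leaked

    common_first = ["love", "god", "password", "123456"]
    dictionary = ["aardvark", "aardwolf", "zen"]

    for word in common_first + dictionary:
        if hashlib.md5(word.encode()).hexdigest() == target:
            print("cracked:", word)
            break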

How to protect yourself

Following the rhetorical question, “When is a password secure?” we are treated to some dubious claims. For instance, the first claim made:
You cannot protect against “asking” and “guessing”, but you can protect yourself from the other forms of attacks.
Well beyond dubious, this is quite blatantly false. To protect yourself from “asking”: Never tell anyone else your password.
As for protecting yourself from “guessing”, Thomas Baekdal himself provided the answer to this problem when he first brought it up:

This is the second most common method to access a person’s account. It turns out that most people choose a password that is easy to remember, and the easiest ones are those that are related to you as a person. Passwords like: your last name, your wife’s name, the name of your cat, the date of birth, your favorite flower etc. are all pretty common. This problem can only be solved by choosing a password with no relation to you as a person.
The next dubious claim immediately follows:
A hacker will usually create an automated script or a program that does the work for him. He isn’t going to sit around manually trying 500,000 different words to see if one of them is your password. The measure of security must then be “how many password requests can the automated program make - e.g. per second”. The actual number varies, but most web applications would not be capable of handling more than 100 sign-in requests per second.
It is true that a security cracker is likely to use an automated process to crack a password rather than typing everything by hand (and here he contradicts his own earlier statements about “asking” and “guessing” being the most common). It is also true that the amount of time it takes to try out enough passwords to successfully select the needed password is effectively the measure of password strength. This claim implies that his rule of no more than 100 password tries per second is invariant and reliable, however. It is not. While 100 per second may be optimistic on the security cracker’s part if his script tries to actually sign in via a Web form provided by a server somewhere out there on the Internet, this is far from the only way to crack a password. In fact, most cases of cracking large numbers of passwords involve someone gaining access to a database of encrypted or plaintext passwords, password hashes, or other server-side login data that allows an offline attack. Utilizing technologies like GPU password cracking frameworks and SSD storage of password hash databases for faster access times, fourteen character passwords have been cracked in around five seconds — much more quickly than the three minutes Thomas Baekdal asserts it would take to crack the password “sun” using a brute force attack. The difference is that in the real world, security crackers are often not so naive as to try to brute force a password via a Web login form.
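The arithmetic is easy to check. The 100-guesses-per-second figure is Baekdal’s; the offline GPU rate below is an illustrative round number, not a benchmark:

    # Worst-case time to exhaust all three-lowercase-letter passwords
    # ("sun") at an online rate versus an offline GPU rate.
    ONLINE_RATE = 100                # sign-in attempts per second (Baekdal)
    GPU_RATE = 10_000_000_000        # guesses per second, illustrative

    keyspace = 26 ** 3               # 17,576 possibilities
    print(keyspace / ONLINE_RATE)    # ~176 seconds: Baekdal's "three minutes"
    print(keyspace / GPU_RATE)       # effectively instantaneous offline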
We then encounter the discussion of how difficult a cracking job is “difficult enough”. The claim is made that a password that takes ten years to crack (using the limit of 100 requests per second described above) is unlikely to ever be cracked. His words:

10 years - Now we are talking purely theoretical.
Even a password that would take ten years to crack at the rates achieved by GPU cracking frameworks in local attacks, with the password hash database on SSD storage, is not “purely theoretical”. It is quite predictably going to take much less time than ten years to crack such a password at some point in the future, possibly as soon as tomorrow, if someone comes up with yet another clever trick to speed up the cracking process. With that in mind, this statement of his becomes so naive as to be laughable:
I want a password that takes 1,000 years to crack - let’s call this “secure forever”.
Maybe, instead of “forever”, it can be cracked in an afternoon when someone invents a new approach to password cracking six months from now. Worse yet, the rate of technological advancement is accelerating exponentially.
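Even setting clever new techniques aside, steady hardware improvement alone is enough to sink a “1,000 years to crack” guarantee. Here is a sketch under one illustrative assumption, namely that cracking throughput doubles roughly every two years, so a patient attacker can simply wait for faster hardware before starting.

    # How long does a "1,000-year" password really last if hardware keeps
    # improving? Assumption (illustrative): cracking throughput doubles
    # every 2 years, and the attacker may wait before starting the job.

    def total_time(wait_years, job_years_today, doubling_period=2.0):
        """Wait for faster hardware, then run the whole job at that speed."""
        speedup = 2 ** (wait_years / doubling_period)
        return wait_years + job_years_today / speedup

    best = min(total_time(w / 10, 1000) for w in range(1000))
    print(f"{best:.1f} years")   # ~19.8 years, not "secure forever"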

The answer to our prayers is “NO”

After a set of contrived examples of various passwords and their strengths, Thomas Baekdal has set us up for the Big Reveal. The result:
this is fun
He claims this password, actually three distinct, short, common words separated by spaces, takes about 2.5K years to crack. In truth, it takes far less time than that using the GPU cracking methods already mentioned, because it is basically just an eleven-character password selected from a set of only 27 possible characters (26 lower case letters and the space character). All he has done is reinvent the concept of the “passphrase”, poorly. Passphrases, loudly touted by many as The Most Secure Thing EVAR, do not in fact increase both convenience and security the way people seem to think. All they do is rearrange the math used to determine how easily a password is cracked.
Tell me — which do you think is more difficult to crack?
  • this is fun
  • &n" <` O C[
If I were a betting man, I would not put my money on "this is fun". It should be immediately obvious that the first is easier to crack, for the simple fact that this is fun draws on a set of characters represented by 27 keys on your keyboard with no modifiers (the Shift key, for instance), while &n" <` O C[ draws on a set of 86 characters used by a random password generator. To give you an idea of how much of a difference this makes, Thomas Baekdal's example offers 5,559,060,566,555,523 different possibilities from his implied character set, or over five quadrillion. My eleven-character password offers 1,903,193,523,736,248,103,936 different possibilities, or almost two sextillion: more than 340,000 times as many. It gets even better if I want a longer password, because adding just one more character multiplies the number of possibilities by 86.
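The keyspace arithmetic is simple enough to verify directly:

    # The arithmetic behind the two 11-character keyspaces compared above.
    passphrase_space = 27 ** 11   # 26 lowercase letters plus the space character
    random_space = 86 ** 11       # the 86-character random-generator set

    print(f"{passphrase_space:,}")   # 5,559,060,566,555,523
    print(f"{random_space:,}")       # 1,903,193,523,736,248,103,936
    print(f"{random_space // passphrase_space:,}x")   # 342,358x: the real gap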
For anything nontrivial, though, I do not trust an eleven-character password, even one drawn from an 86-character set. Twenty is a reasonably good minimum number of characters today, and that is for only moderately important cases. With a good password manager, I can use fifty-character passwords if I want to, as long as the login system allows them, and do it much more easily than remembering a different equivalent of "this is fun" for each such case. After a while, anyone might forget which site uses "this is fun" and which uses "my alsatian spot".
That, of course, completely ignores an even bigger problem with passphrases like "this is fun". Sure, it is an eleven-character password with spaces in it, but that is not all it is. It is also a non-random eleven-character password. Given that all the words involved are common, using a passphrase has done nothing more profound than move the goalposts a little. Once the kicker gets lined up, aiming is not much more difficult than it was before the goalposts moved.
From a certain perspective, "this is fun" is just a password with three "characters" in it. The difference is that the "characters" are taken from a larger character set. A commonly used password cracking wordlist, the Openwall wordlist collection, has four million entries in it, but the majority of these can be omitted when performing an initial cracking attempt against an English-speaking user so naive as to use something like "do the dishes" on Thomas Baekdal's advice. The much shorter, public domain list Openwall offers as a free download contains only 3,158 entries. Three words from that list give a mere 3,158 × 3,158 × 3,158 = 31,494,620,312 possibilities, or about 31.5 billion. Throw in a few articles, particles, and participles, and you are still well under 4K words.
If your security cracker is the least bit sophisticated, the script used to crack passphrases will choose words by parts of speech to fit common patterns. Suddenly, 31.5 billion options become something much more modest. If the cracking script assumes any two words from the unaltered Openwall list with one of "is", "was", "can", "cannot", "for", "will", "should", "looks", or "seems" between them, the number of possibilities drops from 31.5 billion to a mere 3,158 × 9 × 3,158 = 89,756,676, or under 90 million: less than one third of one percent of the original number of options.
. . . and we are still including non-random, non-word strings of characters such as "2kids", "1022", and "!@#$%^" amongst the "words" that might be used. Really, to use examples of Thomas Baekdal's system for creating passwords, we should exclude a lot of those, leaving something on the order of only 2,500 words. That brings our hypothetical password cracking script's first pass through possible passphrases down to 2,500 × 2,500 = 6.25 million combinations for each connecting word. Even at a rate of one hundred tries per second, already discarded as naive when someone might have access to a database of password hashes, we are now looking at a maximum time to crack "this is fun" of under twenty hours.
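All of the figures above reduce to a few multiplications, which you can check yourself:

    # The passphrase-pattern arithmetic from the last three paragraphs.
    small_wordlist = 3158     # entries in Openwall's free public-domain list
    connectors = 9            # "is", "was", "can", "cannot", "for", ...
    pruned_wordlist = 2500    # rough count after dropping "2kids", "!@#$%^", etc.

    print(f"{small_wordlist ** 3:,}")   # 31,494,620,312: about 31.5 billion
    print(f"{small_wordlist * connectors * small_wordlist:,}")   # 89,756,676

    # First pass: 2,500 x 2,500 outer-word pairs per connecting word,
    # at the naive online rate of 100 guesses per second.
    first_pass = pruned_wordlist ** 2
    print(f"{first_pass:,} guesses, about {first_pass / 100 / 3600:.1f} hours")
    # 6,250,000 guesses, about 17.4 hours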
That is much less time than the 2,537 years he claims, and I am not even trying that hard.

Catching up with today

Somewhere along the line, someone must have told him about the flaws in his reasoning, because he offered this defense:
You need direct access to the database file or the server to hack something faster. If you got that level of access, you do not actually need the password. You can just look up the data directly.
This ignores a number of factors, including (but not limited to):
  • The password hash database might be more accessible than other data; authentication might be performed on a separate server from financial data.
  • The user, following the advice of a non-expert like Thomas Baekdal, might use one password everywhere -- thus making a weak password a vulnerability for many other sites.
  • As demonstrated, a less naive approach to password cracking than he imagined might actually work within a single day even under his unlikely constraints.
  • Password databases on stolen smartphones and laptops could provide password hashes without providing any other data from the server the password might be used to access.
  • A flaw in the authentication application could conceivably expose the server to unforeseen work-arounds, and the strength of your password might be the only mitigation.
Other problems may come to mind with more thought. Of course, he takes the defensive measure of trying to exclude offline attacks from his analysis:

What I am talking about in the article is hacking into remote systems (like web apps).
The problem is that the world does not conform to our wishes. Assuming that our passwords are immune to offline attacks just because he decided to ignore them in his Weblog entry is yet another incredibly naive part of his perspective. Simply waving away server-side issues that undermine his argument, such as the fact that passwords are easier to crack when the server admins do not add punitive delays for incorrect sign-in attempts, does not make those issues any less of a problem. The fact of the matter is that people implement crappy authentication systems all the time, and sometimes we use those systems, often without even knowing how badly they were implemented on the back end. Choosing passwords that take longer to crack is a way to mitigate the danger of such poorly designed systems. Saying "it's not my problem" when someone points out that the server might not be set up to look after your security is nothing more than a good excuse for not caring about your security.
Yes, the server admin should set up a secure system. No, recognizing that fact does not mean you should not worry about it, even if you are not the server admin.

How I learned to stop worrying and love the password

The real reason people hate strong passwords so much is probably that they are being told to use them by people who, while they are a bit smarter about the difficulty of cracking passwords than Thomas Baekdal, are sometimes downright stupid about security policy enforcement. Telling people they should use long passwords containing capital and lower case letters, numbers, spaces, and special characters just makes the technically uninclined sigh and complain. In "The Usability of Passwords - FAQ," Thomas Baekdal's 2011 follow-up to "The Usability of Passwords," he actually addresses this problem when he discusses why he wrote the original Weblog entry:
The article came to life after yet another discussion with IT, who believed that everyone should be forced to use password with a minimum of eight characters, including two uppercase characters, numbers and a least one special character. I was absolutely furious for several reasons. First, I knew it was like kicking every employee in the groin every morning they showed up for work, that it would do squat for actual security (it is likely to make it worse), and that it would completely destroy the plans I had for password free web application I was working on at the time.
If I had to type in a password like >9Llz]HX2×8w.5&Go{$k~5pIz&{ every day when I checked my email, another like Rk~b$icSNGz:+1C8`Vp <-~q@un6O to check my bank balance, something like gjj-~ixnCs3{/7yS]r(BW#S,q1?9 to log in at TechRepublic, and many more for every site on the Web, I would hate strong passwords too. I type exactly one password for everything on the Web, though: the master password for my password manager. I do not use the same password at every site; the password manager remembers all those different passwords for me.
The key to getting people to stop worrying and love the password is to stop telling people first and foremost to use strong passwords for everything. Yes, they should use strong passwords — but they should let the password manager handle it for them. Tell them instead to use a password manager, and to configure it to use strong passwords on their behalf.
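For illustration, here is a minimal sketch of the kind of password a manager can generate on your behalf, using Python's standard secrets module. The twenty-character length and the 94-character set are illustrative choices, not requirements.

    import secrets
    import string

    # A password manager does this for you; the point is that nobody has
    # to remember the result, so it can be as random as the site allows.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 characters

    def generate_password(length=20):
        """Return a cryptographically random password of the given length."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())   # e.g. 'k;T2}qN&...', pasted once into the manager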
For more than 98% of cases, a password manager solves your problem, and it keeps you from ending up as that guy who posts a lengthy explanation of how little he actually understands about passwords and security, thinly disguised as bad advice that a lot of people might actually follow.