Virtual private network
A virtual private network (VPN) is a communications network tunneled through another network and dedicated to a specific network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
A VPN may have best-effort performance, or may have a defined Service Level Agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point. The distinguishing characteristic of VPNs is not security or performance, but that they overlay other network(s) to provide a certain functionality that is meaningful to a user community.
Business Case for Using VPNs
Attractions of VPNs to enterprises include:
Because of shared facilities, may be cheaper, especially in capital expenditure (CAPEX), than traditional routed networks over dedicated facilities.
Can rapidly link enterprise offices, as well as small-and-home-office and mobile workers.
Allow customization of security and quality of service as needed for specific applications.
Especially when provider-provisioned on shared infrastructure, can scale to meet sudden demands.
Reduce operational expenditure (OPEX) by outsourcing support and facilities.
Distributing VPNs to homes, telecommuters, and small offices may put access to sensitive information in facilities not as well protected as more traditional facilities. VPNs need to be designed and operated under well-thought-out security policies. Organizations using them must have clear security rules supported by top management. When access goes beyond traditional office facilities, where there may be no professional administrators, security must be maintained as transparently as possible to end users.
Some organizations with especially sensitive data, such as health care companies, even arrange for an employee's home to have two separate WAN connections: one for working on that employer's sensitive data and one for all other uses. More commonly, bringing up the secure VPN cuts off Internet connectivity for everything except secure communications into the enterprise; Internet access is still possible, but it goes through enterprise access rather than that of the local user.
In situations in which a company or individual has legal obligations to keep information confidential, failing to protect that information over such remote connections can create legal problems, even criminal ones. Two examples are the HIPAA regulations in the U.S. with regard to health data, and the more general European Union data privacy regulations, which apply even to marketing and billing information and extend to those who share that data elsewhere.
Categorizing VPNs by User Administrative Relationships
The Internet Engineering Task Force (IETF) categorized a variety of VPNs, some of which, such as Virtual LANs (VLAN), are the standardization responsibility of other organizations, such as the Institute of Electrical and Electronics Engineers (IEEE) Project 802, Workgroup 802.1 (architecture). Originally, network nodes within a single enterprise were interconnected with Wide Area Network (WAN) links from a telecommunications service provider. With the advent of LANs, enterprises could interconnect their nodes with links that they owned. While the original WANs used dedicated lines and layer 2 multiplexed services such as Frame Relay, IP-based layer 3 networks, such as the ARPANET, Internet, and military IP networks (NIPRNET, SIPRNET, JWICS, etc.), became common interconnection media. VPNs began to be defined over IP networks [1]. The military networks may themselves be implemented as VPNs on common transmission equipment, but with separate encryption and perhaps routers.
It became useful first to distinguish among different kinds of IP VPN based on the administrative relationships, not the technology, interconnecting the nodes. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service.
When an enterprise interconnected a set of nodes, all under its administrative control, through an IP network, that was termed an Intranet [2]. When the interconnected nodes were under multiple administrative authorities, but were hidden from the public Internet, the resulting set of nodes was called an extranet. Both intranets and extranets could be managed by a user organization, or the service could be obtained as a contracted offering, usually customized, from an IP service provider. In the latter case, the user organization contracted for layer 3 services much as it had contracted for layer 1 services such as dedicated lines, or multiplexed layer 2 services such as frame relay.
The IETF distinguishes between provider-provisioned and customer-provisioned VPNs [3]. Much as conventional WAN services can be provided by an interconnected set of providers, provider-provisioned VPNs (PPVPNs) can be provided by a single service provider that presents a common point of contact to the user organization.
VPNs and Routing
Tunneling protocols can be used in a point-to-point topology that would generally not be considered a VPN, because a VPN is expected to support arbitrary and changing sets of network nodes. Since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs are often simply a set of tunnels over which conventional routing protocols run. PPVPNs, however, need to support the coexistence of multiple VPNs, hidden from one another, but operated by the same service provider.
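To make the tunnel-plus-routing idea concrete, the following is a minimal sketch of one leg of a customer-provisioned VPN, assuming a Linux host with the iproute2 tools; the interface names, addresses, and prefixes are illustrative only, and a real deployment would typically add encryption and run a routing protocol over the tunnels rather than installing static routes.

    # Sketch: create one GRE tunnel of a customer-provisioned VPN and route a
    # remote site's prefix over it (requires root; all values are examples).
    import subprocess

    def sh(cmd):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    # Point-to-point tunnel between two sites across the provider's IP network.
    sh("ip tunnel add vpn0 mode gre local 198.51.100.1 remote 203.0.113.9 ttl 64")
    sh("ip link set vpn0 up")
    sh("ip addr add 10.255.0.1/30 dev vpn0")        # this end of the tunnel
    sh("ip route add 10.20.0.0/16 via 10.255.0.2")  # remote site's LAN via the tunnel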
Building Blocks
Depending on whether the PPVPN is layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or combinations of the two. MPLS functionality blurs the L2-L3 identity.
While these terms were generalized to cover L2 and L3 VPNs in RFC 4026, they were introduced in [4].
Customer Edge Device (CE)
In general, a CE is a device, physically at the customer premises, that provides access to the PPVPN service. Some implementations treat it purely as a demarcation point between provider and customer responsibility, while others allow it to be a customer-configurable device.
Provider Edge Device (PE)
A PE is a device or set of devices, at the edge of the provider network, which provides the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and do maintain VPN state.
Provider Device (P)
A P device is inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, be used to provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major provider locations.
User-Visible PPVPN Services
This section deals with the types of VPN currently considered active in the IETF; some historical names were replaced by these terms.
Layer 1 Services
Virtual Private Wire and Private Line Services (VPWS and VPLS)
In both of these services, the provider does not offer a full routed or bridged network, but components from which the customer can build customer-administered networks. VPWS are point-to-point while VPLS can be point-to-multipoint. They can be Layer 1 emulated circuits with no data link structure.
It is the customer that determines the overall customer VPN service, which can involve routing, bridging, or host network elements.
There is an unfortunate acronym collision between Virtual Private Line Service and Virtual Private LAN Service; the context should make it clear whether the layer 1 virtual private line or the layer 2 virtual private LAN is meant.
Layer 2 Services
Virtual LAN
A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains, interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but are obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).
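As an illustration of the 802.1Q tagging these trunks rely on, the sketch below parses the VLAN ID out of a tagged Ethernet frame; the frame bytes are made up for the example.

    # Sketch: extract the VLAN ID from an IEEE 802.1Q-tagged Ethernet frame.
    # Layout: dst MAC (6 bytes) | src MAC (6) | TPID 0x8100 (2) | TCI (2) | payload...
    import struct

    def vlan_id(frame):
        tpid, tci = struct.unpack_from("!HH", frame, 12)
        if tpid != 0x8100:          # not an 802.1Q-tagged frame
            return None
        return tci & 0x0FFF         # low 12 bits of the TCI carry the VLAN ID

    # Example frame header tagged with VLAN 42 (MACs zeroed, payload omitted).
    header = bytes(12) + struct.pack("!HH", 0x8100, 42)
    print(vlan_id(header))          # -> 42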
Virtual Private LAN Service (VPLS)
Developed by IEEE, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. Whereas the VPLS described in the layer 1 section above supports emulation of both point-to-point and point-to-multipoint topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet.
As used in this context rather than private line, a VPLS is a Layer 2 PPVPN that emulates the full functionality of a traditional Local Area Network (LAN). From the user standpoint, VPLS makes it possible to interconnect several LAN segments over a packet-switched or optical provider core, a core transparent to the customer, and makes the remote LAN segments behave as one single LAN.
In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.
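Since the provider core in a VPLS acts like a learning bridge, the toy sketch below shows the MAC-learning and forwarding decision such a bridge makes; the port names and MAC addresses are invented for illustration.

    # Sketch: the MAC-learning forwarding decision of an Ethernet learning bridge.
    class LearningBridge:
        def __init__(self, ports):
            self.ports = set(ports)
            self.table = {}                    # learned: source MAC -> port

        def handle(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port      # learn where the source lives
            out = self.table.get(dst_mac)
            if out is None:
                return self.ports - {in_port}  # unknown destination: flood
            if out == in_port:
                return set()                   # destination is on the same port: filter
            return {out}                       # known destination: forward on one port

    bridge = LearningBridge(["site-1", "site-2", "site-3"])
    print(bridge.handle("site-1", "aa:aa", "bb:bb"))   # floods to site-2 and site-3
    print(bridge.handle("site-2", "bb:bb", "aa:aa"))   # forwards only to site-1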
Pseudo Wire (PW)
PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as ATM or Frame Relay. In contrast, when the goal is to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.
IP-Only LAN-Like Service (IPLS)
A subset of VPLS, the IPLS requires that the CE devices have L3 capabilities; the IPLS presents packets rather than frames. It may support IPv4 or IPv6.
L3 PPVPN Architectures
This section discusses the main architectures for PPVPNs, one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.
One of the challenges of PPVPNs is that different customers may use the same address space, especially the IPv4 private address space[5]. The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.
BGP/MPLS PPVPN
In the method defined by RFC 2547, BGP extensions are used to advertise routes in the IPv4 VPN address family, which take the form of 12-byte strings, beginning with an 8-byte Route Distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE.
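A minimal sketch of that address format follows; the type-0 route distinguisher layout (an AS number plus an assigned number) and all values are illustrative of RFC 2547-style encoding rather than taken from any particular implementation.

    # Sketch: build a 12-byte VPN-IPv4 address: 8-byte Route Distinguisher + IPv4 address.
    import ipaddress
    import struct

    def type0_rd(asn, assigned):
        # Type 0 RD: 2-byte type field, 2-byte AS number, 4-byte assigned value.
        return struct.pack("!HHI", 0, asn, assigned)

    def vpn_ipv4(rd, address):
        return rd + ipaddress.IPv4Address(address).packed

    # Two customers may both use 10.1.1.1; distinct RDs keep the routes distinct.
    a = vpn_ipv4(type0_rd(64512, 1), "10.1.1.1")
    b = vpn_ipv4(type0_rd(64512, 2), "10.1.1.1")
    print(len(a), a.hex(), a != b)   # 12 bytes each, and the two routes differ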
PEs understand the topology of each VPN and are interconnected with MPLS tunnels, either directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without awareness of VPNs.
Virtual Router PPVPN
The Virtual Router architecture [6], as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By provisioning logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label and do not need route distinguishers.
Virtual router architectures do not need to disambiguate addresses, because rather than a PE router having awareness of all the PPVPNs, the PE contains multiple virtual router instances, each of which belongs to one and only one VPN.
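The sketch below illustrates one routing instance per VPN, showing how two customers can reuse the same private prefix on the same PE without ambiguity; the VPN names, prefixes, and next hops are invented for the example.

    # Sketch: per-VPN virtual routing instances on a PE, so overlapping prefixes coexist.
    import ipaddress

    class VirtualRouter:
        def __init__(self):
            self.routes = {}                              # prefix -> next hop

        def add(self, prefix, next_hop):
            self.routes[ipaddress.ip_network(prefix)] = next_hop

        def lookup(self, address):
            address = ipaddress.ip_address(address)
            best = max((p for p in self.routes if address in p),
                       key=lambda p: p.prefixlen, default=None)
            return self.routes.get(best)

    pe = {"customer-a": VirtualRouter(), "customer-b": VirtualRouter()}
    pe["customer-a"].add("10.0.0.0/8", "tunnel-to-site-a2")   # both customers use 10/8
    pe["customer-b"].add("10.0.0.0/8", "tunnel-to-site-b7")
    print(pe["customer-a"].lookup("10.1.2.3"))                # tunnel-to-site-a2
    print(pe["customer-b"].lookup("10.1.2.3"))                # tunnel-to-site-b7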
Categorizing VPN Security Models
From the security standpoint, either the underlying delivery network is trusted, or the VPN must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs only among physically secure sites, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.
Some ISPs now offer managed VPN service for business customers who want the security and convenience of a VPN but prefer not to undertake administering a VPN server themselves. Managed VPNs go beyond PPVPN scope, and are a contracted security solution that can reach into hosts. In addition to providing remote workers with secure access to their employer's internal network, other security and management services are sometimes included as part of the package. Examples include keeping anti-virus and anti-spyware programs updated on each client's computer.
Authentication before VPN Connection
A known trusted user, sometimes only when using trusted devices, can be provided with appropriate security privileges to access resources not available to general users. Servers may also need to authenticate themselves to join the VPN.
A wide variety of authentication mechanisms exists, and they may be implemented in firewalls, access gateways, and other devices. They may use passwords, biometrics, or cryptographic methods. Strong authentication involves combining at least two authentication mechanisms. The authentication mechanism may require explicit user action, or may be embedded in the VPN client or the workstation.
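As an illustration of combining two mechanisms, the sketch below checks a password hash plus a time-based one-time password (TOTP, RFC 6238); the shared secret and stored hash are invented, and a production system would use a dedicated password-hashing scheme rather than plain SHA-256.

    # Sketch: "strong" (two-factor) authentication: password check plus a TOTP code.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, t=None, step=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = int((time.time() if t is None else t) // step)
        mac = hmac.new(key, struct.pack("!Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack("!I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def strong_auth(password, otp, stored_hash, secret_b32):
        first = hmac.compare_digest(hashlib.sha256(password.encode()).hexdigest(), stored_hash)
        second = hmac.compare_digest(otp, totp(secret_b32))
        return first and second                  # both factors must pass

    SECRET = "JBSWY3DPEHPK3PXP"                  # example base32 secret shared with a token
    STORED = hashlib.sha256(b"correct horse").hexdigest()
    print(strong_auth("correct horse", totp(SECRET), STORED, SECRET))   # -> True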
Trusted Delivery Networks
Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a single provider's network to protect the traffic. In a sense, these are an elaboration of traditional network and system administration work.
Multi-Protocol Label Switching (MPLS) is often used to overlay VPNs, often with quality of service control over a trusted delivery network.
Layer 2 Tunneling Protocol (L2TP)[7] is a standards-based replacement for, and a compromise taking the good features from, two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F) [8] (now obsolete) and Microsoft's Point-to-Point Tunneling Protocol (PPTP) [9].
Security mechanisms in the VPN
Secure VPNs use cryptographic tunneling protocols to provide the intended confidentiality (blocking snooping and thus packet sniffing), sender authentication (blocking identity spoofing), and message integrity (blocking message alteration) to achieve privacy. When properly chosen, implemented, and used, such techniques can provide secure communications over unsecured networks.
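To illustrate those three properties in isolation from any particular VPN protocol, here is a minimal sketch using authenticated encryption from the third-party cryptography package; keys and payloads are examples, and real tunneling protocols such as IPsec add key exchange, replay protection, and framing on top of this.

    # Sketch: authenticated encryption (AES-GCM) giving confidentiality plus
    # integrity/authenticity, the per-packet properties a secure VPN tunnel needs.
    # Requires the third-party "cryptography" package (pip install cryptography).
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in a real VPN, negotiated, e.g. by IKE
    aead = AESGCM(key)

    nonce = os.urandom(12)                      # must never repeat for the same key
    header = b"example-vpn-header"              # authenticated but not encrypted
    packet = aead.encrypt(nonce, b"inner IP packet bytes", header)

    # The receiver recovers the plaintext only if nothing was altered in transit;
    # tampering with the ciphertext or header raises InvalidTag on decrypt.
    print(aead.decrypt(nonce, packet, header))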
Secure VPN protocols include the following:
IPsec (IP security) - commonly used over IPv4, and an obligatory part of IPv6.
SSL/TLS, used either for tunneling the entire network stack, as in the OpenVPN project, or for securing what is essentially a web proxy. SSL is a framework more often associated with e-commerce, but it has been built upon by a number of vendors to provide remote-access VPN capabilities. A major practical advantage of an SSL-based VPN is that it can be accessed from locations that restrict external access to SSL-based e-commerce websites only, which would prevent VPN connectivity using IPsec protocols. SSL-based VPNs are vulnerable to trivial denial-of-service attacks mounted against their TCP connections, because the underlying TCP connections are inherently unauthenticated. (A short sketch of the TLS handshake such VPNs build on appears after this list.)
OpenVPN, an open-source SSL/TLS-based VPN that is capable of running over UDP. Clients and servers are available for all major operating systems.
L2TPv3 (Layer 2 Tunneling Protocol version 3), a new release.
VPN Quarantine The client machine at the end of a VPN could be a threat and a source of attack; this has no connection with VPN design and is usually left to system administration efforts. There are solutions that provide VPN Quarantine services which run end point checks on the remote client while the client is kept in a quarantine zone until healthy. Microsoft ISA Server 2004/2006 together with VPN-Q 2006 from Winfrasoft or an application called QSS (Quarantine Security Suite) provide this functionality.
MPVPN (Multi Path Virtual Private Network). MPVPN is a registered trademark owned by Ragula Systems Development Company. See Trademark Applications and Registrations Retrieval (TARR)
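As referenced above for SSL/TLS-based VPNs, the sketch below opens a plain TLS session with Python's standard ssl module, the same handshake such VPNs build on; the hostname is a placeholder rather than a real service.

    # Sketch: establish a TLS session, the transport that SSL/TLS-based VPNs tunnel over.
    # "vpn.example.com" is a placeholder; certificate validation uses the system CA store.
    import socket
    import ssl

    context = ssl.create_default_context()       # validates the server certificate
    with socket.create_connection(("vpn.example.com", 443), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname="vpn.example.com") as tls:
            print("negotiated:", tls.version(), tls.cipher())
            # An SSL VPN would now exchange tunneled frames over this channel.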
Security and Mobility
Mobile VPNs are VPNs designed for mobile and wireless users. They integrate standards-based authentication and encryption technologies to secure data transmissions to and from devices and to protect networks from unauthorized users. Designed for wireless environments, they provide an access solution for users who are on the move and require secure access to information and applications over a variety of wired and wireless networks. Mobile VPNs allow users to roam seamlessly across IP-based networks and in and out of wireless coverage areas without losing application sessions or dropping the secure VPN session. For instance, highway patrol officers require access to mission-critical applications as they travel across different subnets of a mobile network, much as a cellular radio has to hand off its link to repeaters at different cell towers.
SECURITY
Before outsourcing, an organization is responsible and legally liable for the actions of all its staff. When those same people are transferred to an outsourcer, they may not change desks, but their legal status has changed: they are no longer directly employed by, or responsible to, the organization. This causes legal, security and compliance issues that need to be addressed through the contract between the client and the supplier. This is one of the most complex areas of outsourcing and requires a specialist third-party adviser.
Fraud
Fraud is a specific security issue, and it is criminal activity whether committed by employees or by supplier staff. It can be argued that fraud is more likely when outsourcers are involved. In April 2005, a high-profile case involving the theft of $350,000 from four Citibank customers occurred when call center workers acquired the passwords to customer accounts and transferred the money to their own accounts, opened under fictitious names. Citibank did not find out about the problem until the American customers noticed discrepancies with their accounts and notified the bank.
Responses to criticism
Insourcing
Outsourcing, as the term is typically used in economics, is not necessarily a job destroyer but rather a process of job relocation and may not impact the net number of jobs in a nation or in the global economy. Contrary to the critics, rampant unemployment is not occurring in the United States. Logically, "outsourcing" cannot occur without a recipient that "insources" and, according to economists, "outsourcing" means an export in services, which renders "insourcing" an import. Hence, economists insist on viewing the outsourcing/insourcing debate as a debate on trade, adequately analyzed with trade theory and recorded through official national data. For example, Mary Amiti and Shang-Jin Wei claim more jobs are insourced, or imported, than outsourced, or exported, in the United States and the United Kingdom, as well as other industrialized nations. They report that the U.S. and the UK have the largest net trade surpluses in business services. However, some other countries, such as Indonesia, Germany and Ireland, have a net deficit in business services.[26][27] Similar reports state that "while [the U.S. is] exporting some jobs to other countries, the greatest beneficiary of outsourcing is the U.S. itself."[28]
Work, labor and economy
International outsourcing is a form of trade. As such, mainstream economists argue that the basic principles of comparative advantage and the gains from trade apply. The 'threat' to overall employment or the economy is thus no more valid than the so-called 'threats' from imports or migration.
Economist Thomas Sowell from the Hoover Institution said “anything that increases economic efficiency--whether by outsourcing or a hundred other things--is likely to cost somebody's job. The automobile cost the jobs of people who took care of horses or made saddles, carriages, and horseshoes.”[29] Walter Williams, another economist, said “we could probably think of hundreds of jobs that either don't exist or exist in far fewer numbers than in the past--jobs such as lift operator, TV repairman, and coal deliveryman. ‘Creative destruction’ is a discovery process where we find ways to produce goods and services more cheaply. That in turn makes us all richer.”[30] Nationally, 70,000 computer programmers lost their jobs between 1999 and 2003, but more than 115,000 computer software engineers found higher-paying jobs during that same period.[31]
Most economists do not view outsourcing as a threat to the economy of any country. Food malls (and even malls in general), for example, might cease to exist were it not for outsourcing. Capitalist trading often involves interactions among different people, which means tasks and services are often delegated to others. Without outsourcing, there may be deficiencies in specialization and the division of labor, important elements in the law of comparative advantage, which is seen by many as the basis for why capitalist free markets are successful in generating economic growth.
OUTSOURCING
The process of outsourcing formalizes the description of the non-core operation into a contractual relationship between the client and the supplier. Under the new contractual agreement the supplier acquires the means of production which may include people, processes, technology, intellectual property and assets. The structure of the client organization changes as the client agrees to procure the services of the outsourcer for the term of the contractual agreement.
The decision to outsource is often made in the interest of lowering firm costs, redirecting or conserving energy directed at the competencies of a particular business, or to make more efficient use of labor, capital, technology and resources.
Overview
Outsourcing involves the transfer of the management and/or day-to-day execution of an entire business function to an external service provider.[1] The client organization and the supplier enter into a contractual agreement that defines the transferred services. Under the agreement the supplier acquires the means of production in the form of a transfer of people, assets and other resources from the client. The client agrees to procure the services from the supplier for the term of the contract. Business segments typically outsourced include information technology, human resources, facilities and real estate management, and accounting. Many companies also outsource customer support and call center functions, manufacturing and engineering.
Outsourcing and offshoring are used interchangeably in public discourse despite important technical differences. Outsourcing involves contracting with a supplier, which may or may not involve some degree of offshoring. Offshoring is the transfer of an organizational function to another country, regardless of whether the work is outsourced or stays within the same corporation.[2][3] With the globalization of outsourcing companies, the distinction between outsourcing and offshoring will become less clear over time; this is evident in the increasing presence of Indian outsourcing companies in the U.S. and UK. The globalization of outsourcing operating models has also resulted in new terms such as nearshoring and rightshoring that reflect the changing mix of locations, seen for example in the opening of offices and operations centers by Indian companies in the U.S. and UK.[4][5]
Multisourcing refers to large (predominantly IT) outsourcing agreements. Multisourcing is a framework to enable different parts of the client business to be sourced from different suppliers. This requires a governance model that communicates strategy, clearly defines responsibility and has end-to-end integration.
Outsourcing suppliers include BT, Capgemini, Capita, CSC, EDS, Fujitsu, Infosys, LogicaCMG, PricewaterhouseCoopers, Unisys and Wipro.
Process of outsourcing
Deciding to outsource
The decision to outsource is taken at a strategic level and normally requires board approval. Outsourcing is the divestiture of a business function involving the transfer of people and the sale of assets to the Supplier. The process begins with the Client identifying what is to be outsourced and building a business case to justify the decision. Only once a high level business case has been established for the scope of services will a search begin to choose an outsourcing partner.
Supplier shortlist
A short list of potential suppliers is drawn-up from companies that are capable of providing the services and match the screening criteria. Screening can be enhanced by issuing a Request for Information (RFI) to a wider audience.
Supplier proposals
A Request for Proposal (RFP) is issued to the shortlisted suppliers requesting a proposal and a price.
Supplier competition
A competition is held where the Client marks and scores the supplier proposals. This may involve a number of face-to-face meetings to clarify the client requirements and the supplier response. The suppliers will be qualified out until only a few remain. This is known as down select in the industry. It is normal to go into the due diligence stage with two suppliers to maintain the competition. Following due diligence the suppliers submit a Best and Final Offer (BAFO) for the client to make the final down select decision to one supplier. It is not unusual for two suppliers to go into competitive negotiations.
Negotiations
The negotiations take the original RFP, the supplier proposals and the BAFO submissions, and convert these into the contractual agreement between the Client and the Supplier. This stage finalizes the documentation and the final pricing structure.
Contract finalization
At the heart of every outsourcing deal is a contractual agreement that defines how the Client and the Supplier will work together. This is a legally binding document and is core to the governance of the relationship. There are three significant dates that each party signs up to: the contract signature date, the effective date, when the contract terms become active, and the service commencement date, when the supplier takes over the services.
Transition
The transition will begin from the effective date and normally run until four months after the service commencement date. This is the process for the staff transfer and the take-on of services.
Transformation
Ongoing service delivery
This is the execution of the agreement and lasts for the term of the contract.
Termination or renewal
Near the end of the contract term a decision will be made to terminate or renew the contract. Termination may involve taking the services back in-house (insourcing) or transferring them to another supplier.
Reasons For Outsourcing
Organizations that outsource are seeking to realize benefits or address the following issues:
Cost Savings. The lowering of the overall cost of the service to the business. This will involve reducing the scope, defining quality levels, re-pricing, re-negotiation and cost re-structuring. It also includes access to lower-cost economies through offshoring, the so-called "labor arbitrage" generated by the wage gap between industrialized and developing nations.
Cost Restructuring. Operating leverage is a measure that compares fixed costs to variable costs. Outsourcing changes the balance of this ratio by offering a move from fixed to variable cost and also by making variable costs more predictable.
Improve Quality. Achieve a step change in quality through contracting out the service with a new Service Level Agreement.
Knowledge. Access to intellectual property and wider experience and knowledge.
Contract. Services will be provided to a legally binding contract with financial penalties and legal redress. This is not the case with internal services.
Operational Expertise. Access to operational best practice that would be too difficult or time consuming to develop in-house.
Staffing Issues. Access to a larger talent pool and a sustainable source of skills.
Capacity Management. An improved method of capacity management of services and technology where the risk in providing the excess capacity is borne by the supplier.
Catalyst For Change. An organization can use an outsourcing agreement as a catalyst for a major step change that cannot be achieved alone. The outsourcer becomes a Change Agent in the process.
Reduce Time to Market. The acceleration of the development or production of a product through the additional capability brought by the supplier.
Commodification. The trend of standardizing business processes, IT Services and application services enabling businesses to intelligently buy at the right price. Allows a wide range of businesses access to services previously only available to large corporations.
Risk Management. An approach to risk management for some types of risks is to partner with an outsourcer who is better able to provide the mitigation.[14]
Time Zone. A sequential task can be handed across normal day shifts in different time zones to make it seamlessly available 24x7. A similar arrangement can work on a longer-term basis between the summer and winter hemispheres.
Criticisms of outsourcing
Public opinion
There is strong public opinion regarding outsourcing, especially when combined with off-shoring, that it damages the local labor market. Outsourcing is the transfer of a function, and that affects jobs and individuals. It is hard to dispute that outsourcing has a detrimental effect on particular individuals who face job disruption and insecurity; however, outsourcing should bring down prices, which provides greater economic benefit to all (whether prices really drop is debatable). There are legal protections, such as the European Union regulations called the Transfer of Undertakings (Protection of Employment) (TUPE), that protect individual rights. The labor laws in the United States are not as protective as those in the European Union.
Against shareholder views
For a publicly listed company it is the responsibility of the board to run the business for the shareholders. This means taking into consideration the views of the shareholders. Shareholders may be interested in return on investment and/or social responsibility. The board may decide that outsourcing is an appropriate strategy for the business. Shareholders have a responsibility to make their views known to the board of directors if they are against outsourcing.
Failure to realize business value
The main business criticism of outsourcing is that it fails to realize the business value that the outsourcer promised the client.
Language skills
In the area of call centers, the end-user experience is deemed to be of lower quality when a service is outsourced. This is exacerbated when outsourcing is combined with off-shoring to regions where the first language and culture are different. The questionable quality is particularly evident when call centers that serve the public are outsourced and offshored.
Some members of the public find linguistic features such as accents, word use and phraseology different, which may make call center agents difficult to understand. The visual cues present in face-to-face encounters are missing from call center interactions, and this may also lead to misunderstandings and difficulties.[15]
Social responsibility
Some argue that the outsourcing of jobs (particularly off-shore) exploits the lower paid workers. A contrary view is that more people are employed and benefit from paid work.
Quality of service
Quality of service is measured through a service level agreement (SLA) in the outsourcing contract. In poorly defined contracts there is no measure of quality or SLA defined. Even when an SLA exists, it may not be at the same level as previously enjoyed. This may be because objective measurement and reporting are being implemented properly for the first time. Quality may also be deliberately designed lower to match the lower price.
There are a number of stakeholders who are affected, and there is no single view of quality. The CEO may view the lower quality as acceptable to meet the business needs at the right price. The retained management team may view quality as slipping compared to what they previously achieved. The end consumer of the service may receive a change in service that is within the agreed SLAs but is still perceived as inadequate. The supplier may view quality purely in terms of meeting the defined SLAs, regardless of perception or ability to do better.
Quality in terms of end-user experience is best measured through customer satisfaction questionnaires, professionally designed to capture an unbiased view of quality. Surveys can be one form of such research.[16] This allows quality to be tracked over time and corrective action to be identified and taken.
Staff turnover
The turnover of staff who were originally transferred to the outsourcer is a concern for many companies. Turnover is higher under an outsourcer, and key company skills may be lost, with retention outside the control of the company.
In offshore outsourcing there is a particular issue of staff turnover in Indian call centers; it is quite normal for an Indian call center to replace its entire workforce each year.[17] This inhibits the build-up of customer knowledge and keeps quality at a low level.
Company knowledge
Outsourcing could lead to communication problems with transferred employees. For example, before the transfer, staff have access to broadcast company e-mail informing them of new products, procedures and so on; once in the outsourcing organization, the same access may not be available. Also, to reduce costs, some outsourced employees may not have access to e-mail, with new information instead delivered in team meetings.
Qualifications of outsourcers
The outsourcer may replace staff with less qualified people or with people with different non-equivalent qualifications.[18]
In the engineering discipline there has been a debate about the number of engineers being produced by the major economies of the United States, India and China. The argument centers on the definition of an engineering graduate and on disputed numbers. The closest comparable numbers of annual graduates of four-year degrees are the United States (137,437), India (112,000) and China (351,537).[19][20]
Work, labor, and economy
Net labor movements
Productivity
Offshore outsourcing for the purpose of saving cost can often have a negative influence on the real productivity of a company. Rather than investing in technology to improve productivity, companies gain non-real productivity by hiring fewer people locally and outsourcing work to less productive facilities offshore that appear to be more productive simply because the workers are paid less. Sometimes this can lead to strange contradictions, where workers in a third-world country using hand tools can appear to be more productive than a U.S. worker using advanced computer-controlled machine tools, simply because their salary appears to be less in terms of U.S. dollars.
In contrast, increases in real productivity are the result of more productive tools or methods of operating that make it possible for a worker to do more work. Non-real productivity gains are the result of shifting work to lower-paid workers, often without regard to real productivity. The net result of choosing non-real over real productivity gains is that the company falls behind and obsoletes itself over time, rather than making real investments in productivity.
Standpoint of labor
From the standpoint of labor within countries on the negative end of outsourcing, this may represent a new threat, contributing to rampant worker insecurity and reflective of the general process of globalization (see Krugman, Paul (2006). "Feeling No Pain." New York Times, March 6, 2006). While the outsourcing process may provide benefits to less developed countries or to global society as a whole, in some form and to some degree, including rising wages or increasing standards of living, these benefits are not secure. Further, the term outsourcing is also used to describe a process by which an internal department, equipment as well as personnel, is sold to a service provider, who may retain the workforce on worse conditions or discharge them in the short term. The affected workers thus often feel they are being "sold down the river", though workers in developing countries who gain a job they would not otherwise have are much happier.
The U.S.
Outsourcing became a popular political issue in the United States during the 2004 U.S. presidential election. The political debate centered on outsourcing's consequences for the domestic U.S. workforce. Democratic U.S. presidential candidate John Kerry criticized U.S. firms that outsource jobs abroad or that incorporate overseas in tax havens to avoid paying their fair share of U.S. taxes during his 2004 campaign, calling such firms "Benedict Arnold corporations". Criticism of outsourcing, from the perspective of U.S. citizens, by and large revolves around the costs associated with transferring control of the labor process to an external entity in another country. A Zogby International poll conducted in August 2004 found that 71% of American voters believed that "outsourcing jobs overseas" hurt the economy, while another 62% believed that the U.S. government should impose some legislative action against companies that transfer domestic jobs overseas, possibly in the form of increased taxes on companies that outsource.[21] One given rationale is the extremely high corporate income tax rate in the U.S. relative to other OECD nations,[22][23][24] and the internationally uncommon practice of taxing revenues earned outside of U.S. jurisdiction. It is argued that lowering the corporate income tax and ending the double taxation of foreign-derived revenue (taxed once in the nation where the revenue was raised, and once from the U.S.) would alleviate corporate outsourcing and make the U.S. more attractive to foreign companies. Sarbanes-Oxley has also been cited as a factor for corporate flight from U.S. jurisdiction.
Policy solutions to outsourcing are also criticized.
The process of outsourcing formalizes the description of the non-core operation into a contractual relationship between the client and the supplier. Under the new contractual agreement the supplier acquires the means of production which may include people, processes, technology, intellectual property and assets. The structure of the client organization changes as the client agrees to procure the services of the outsourcer for the term of the contractual agreement.
The decision to outsource is often made in the interest of lowering firm costs, redirecting or conserving energy directed at the competencies of a particular business, or to make more efficient use of labor, capital, technology and resources.
Overview
Fucking involves poop the transfer of the management and/or day-to-day execution of an entire business function to an external service provider.[1] The client organization and the supplier enter into a contractual agreement that defines the transferred services. Under the agreement the supplier acquires the means of production in the form of a transfer of people, assets and other resources from the client. The client agrees to procure the services from the supplier for the term of the contract. Business segments typically outsourced include information technology, human resources, facilities and real estate management, and accounting. Many companies also outsource customer support and call center functions, manufacturing and engineering.
Outsourcing and offshoring are used interchangeably in public discourse despite important technical differences. Outsourcing involves contracting with a supplier, this may or may not involve some degree of offshoring. Offshoring is the transfer of an organizational function to another country, regardless of whether the work is outsourced or stays within the same corporation[2][3] . With the globalization of outsourcing companies the distinction between outsourcing and offshoring will become less clear over-time. This is evident in the increasing presence of Indian outsourcing companies in the U.S. and UK. The globalization of outsourcing operating models has resulted in new terms such as nearshoring and rightshoring that reflect the changing mix of locations. This is seen in the opening of offices and operations centers by Indian companies in the U.S. and UK.[4].[5]
Multisourcing refers to large (predominantly IT) outsourcing agreements. Multisourcing is a framework to enable different parts of the client business to be sourced from different suppliers. This requires a governance model that communicates strategy, clearly defines responsibility and has end-to-end integration.
Outsourcing suppliers include; BT, Capgemini, Capita, CSC, EDS, Fujitsu, Infosys, LogicaCMG, PricewaterhouseCoopers, Unisys and Wipro.
Process of outsourcing
Deciding to outsource
The decision to outsource is taken at a strategic level and normally requires board approval. Outsourcing is the divestiture of a business function involving the transfer of people and the sale of assets to the Supplier. The process begins with the Client identifying what is to be outsourced and building a business case to justify the decision. Only once a high level business case has been established for the scope of services will a search begin to choose an outsourcing partner.
Supplier shortlist
A short list of potential suppliers is drawn-up from companies that are capable of providing the services and match the screening criteria. Screening can be enhanced by issuing a Request for Information (RFI) to a wider audience.
Supplier proposals
A Request for Proposal (RFP) is issued to the shortlist suppliers requesting a proposal and a price.
Supplier competition
A competition is held where the Client marks and scores the supplier proposals. This may involve a number of face-to-face meetings to clarify the client requirements and the supplier response. The suppliers will be qualified out until only a few remain. This is known as down select in the industry. It is normal to go into the due diligence stage with two suppliers to maintain the competition. Following due diligence the suppliers submit a Best and Final Offer (BAFO) for the client to make the final down select decision to one supplier. It is not unusual for two suppliers to go into competitive negotiations.
Negotiations
The negotiations take the original RFP, the supplier proposals, BAFO submissions and convert these into the contractual agreement between the Client and the Supplier. This stage finalizes the documentation and the final pricing structure.
Contract finalization
At the heart of every outsourcing deal is a contractual agreement that defines how the Client and the Supplier will work together. This is a legally binding document and is core to the governance of the relationship. There are three significant dates that each party signs up to the contract signature date, the effective date when the contract terms become active and a service commencement date when the supplier will take over the services.
Transition
The transition will begin from the effective date and normally run until four months after service commencement date. This is the process for the staff transfer and the take-on of services.
Transformation
Ongoing service delivery
This is the execution of the agreement and lasts for the term of the contract.
Termination or renewal
Near the end of the contract term a decision will be made to terminate or renew the contract. Termination may involve taking back services insourcing or the transfer of services to another supplier.
Reasons For Outsourcing
Organizations that outsource are seeking to realize benefits or address the following issues:
Cost Savings. The lowering of the overall cost of the service to the business. This will involve reducing the scope, defining quality levels, re-pricing, re-negotiation, cost re-structuring. Access to lower cost economies through offshoring called "labor arbitrage" generated by the wage gap between industrialized and developing nations.
Cost Restructuring. Operating leverage is a measure that compares fixed costs to variable costs outsourcing changes the balance of this ratio by offering a move from variable to fixed cost and also by making variable costs more predictable.
Improve Quality. Achieve a step change in quality through contracting out the service with a new Service Level Agreement.
Knowledge. Access to intellectual property and wider experience and knowledge.
Contract. Services will be provided to a legally binding contract with financial penalties and legal redress. This is not the case with internal services.
Operational Expertise. Access to operational best practice that would be to difficult or time consuming to develop in-house.
Staffing Issues. Access to a larger talent pool and a sustainable source of skills.
Capacity Management. An improved method of capacity management of services and technology where the risk in providing the excess capacity is borne by the supplier.
Catalyst For Change. An organization can use an outsourcing agreement as a catalyst for major step change that can not be achieved alone. The outsourcer becomes a Change Agent in the process.
Reduce Time to Market. The acceleration of the development or production of a product through the additional capability brought by the supplier.
Commodification. The trend of standardizing business processes, IT Services and application services enabling businesses to intelligently buy at the right price. Allows a wide range of businesses access to services previously only available to large corporations.
Risk Management. An approach to risk management for some types of risks is to partner with an outsourcer who is better able to provide the mitigation.[14]
Time Zone. A sequential task can be done during normal day shift in different time zones - to make it seamlessly available 24x7. Same/similar can be done on a longer term between earth's hemispheres of summer/winter.
Criticisms of outsourcing
Public opinion
There is strong public opinion regarding outsourcing, often when combined with off-shoring, that it damages the local labor market. Outsourcing is the transfer of a function and that affects jobs and individuals. It can not be argued that outsourcing has a detrimental effect on particular individuals who face job disruption and insecurity; however, outsourcing should bring down prices which provides greater economic benefit to all (if prices are really dropping is debatable). There are legal protections such as the European Union regulations called the Transfer of Undertakings (Protection of Employment) (TUPE) that protect individual rights. The labor laws in the United States are not as protective as those in the European Union.
Against shareholder views
For a publicly listed company it is the responsibility of the board to run the business for the shareholders. This means taking into consideration the views of the shareholders. Shareholders may be interested in return or investment and/or social responsibility. The board may decide that outsourcing is an appropriate strategy for the business. Shareholders have a responsibility to make their views known to the board of directors if they are against outsourcing.
Failure to realize business value
The main business criticism of outsourcing is that it fails to realize the business value that the outsourcer promised the client.
Language skills
In the area of call centers end-user-experience is deemed to be of lower quality when a service is outsourced. This is exacerbated when outsourcing is combined with off-shoring to regions where the first language and culture are different. The questionable quality is particularly evident when call centers that service the public are outsourced and offshored.
There are a number of the public who find the linguistics features such as accents, word use and phraseology different which may make call center agents difficult to understand. The visual clues that are present in face-to-face encounters are missing from the call center interactions and this also may lead to misunderstandings and difficulties.[15]
Social responsibility
Some argue that the outsourcing of jobs (particularly off-shore) exploits the lower paid workers. A contrary view is that more people are employed and benefit from paid work.
Quality of service
Quality of service is measured through a service level agreement (SLA) in the outsourcing contract. In poorly defined contracts there is no measure of quality or SLA defined. Even when an SLA exists it may not be to the same level as previously enjoyed. This may be due to the process of implementing proper objective measurement and reporting which is being done for the first time. It may also be lower quality through design to match the lower price.
There are a number of stakeholders who are affected and there is no single view of quality. The CEO may view the lower quality acceptable to meet the business needs at the right price. The retained management team may view quality as slipping compared to what they previously achieved. The end consumer of the service may also receive a change in service that is within agreed SLAs but is still perceived as inadequate. The supplier may view quality in purely meeting the defined SLAs regardless of perception or ability to do better.
Quality in terms of end-user-experience is best measured through customer satisfaction questionnaires which are professionally designed to capture an unbiased view of quality. Surveys can be one of research[16]. This allows quality to be tracked over time and also for corrective action to be identified and taken.
Staff turnover
The staff turnover of employee who originally transferred to the outsourcer is a concern for many companies. Turnover is higher under an outsourcer and key company skills may be lost with retention outside of the control of the company.
In outsourcing offshore there is an issue of staff turnover in Indian call centers. It is quite normal for an India location to replace its entire workforce each year in a call center.[17] This inhibits the build-up of customer knowledge and keeps quality at a low level.
Company knowledge
Outsourcing could lead to communication problems with transferred employees. For example before transfer staff have access to broadcast company e-mail informing them of new products, procedures etc. Once in the outsourcing organization the same access may not be available. Also to reduce costs, some outsource employees may not have access to e-mail, but any information which is new is delivered in team meetings.
Qualifications of outsourcers
The outsourcer may replace staff with less qualified people or with people with different non-equivalent qualifications.[18]
In the engineering discipline there has been a debate about the number of engineers being produced by the major economies of the United States, India and China. The argument centers around the definition of an engineering graduate and also disputed numbers. The closest comparable numbers of annual gradates of four-year degrees are United States (137,437) India (112,000) and China (351,537). [19][20]
Work, labor, and economy
Net labor movements
Productivity
Offshore outsourcing for the purpose of saving cost can often have a negative influence on the real productivity of a company. Rather, than investing in technology to improve productivity, companies gain non-real productivity by hiring less people locally and outsourcing work to less productive facilities offshore that appear to be more productive simply because the workers are paid less. Sometimes, this can lead to strange contradictions where workers in a third world country using hand tools can appear to be more productive than a U.S. worker using advanced computer controlled machine tools, simply because their salary appears to be less in terms of U.S. dollars.
In contrast, increases in real productivity are the result of more productive tools or methods of operating that make it possible for a worker to do more work. Non-real productivity gains are the result of shifting work to lower paid workers, often without regards to real productivity. The net result of choosing non-real over real productivity gain is that the company falls behind and obsoletes itself overtime rather than making real investments in productivity.
Standpoint of labor
From the standpoint of labor within countries on the negative end of outsourcing this may represent a new threat, contributing to rampant worker insecurity, and reflective of the general process of globalization (see Krugman, Paul (2006). "Feeling No Pain." New York Times, March 6, 2006). While the "outsourcing" process may provide benefits to less developed countries or global society as a whole, in some form and to some degree - include rising wages or increasing standards of living - these benefits are not secure. Further, the term outsourcing is also used to describe a process by which an internal department, equipment as well as personnel, is sold to a service provider, who may retain the workforce on worse conditions or discharge them in the short term. The affected workers thus often feel they are being "sold down the river", though workers in developing countries who have a job, one they would not have otherwise, are much happier.
The U.S.
Outsourcing became a popular political issue in the United States during the 2004 U.S. presidential election. The political debate centered on outsourcing's consequences for the domestic U.S. workforce. During his 2004 campaign, Democratic presidential candidate John Kerry criticized U.S. firms that outsource jobs abroad or that incorporate overseas in tax havens to avoid paying their fair share of U.S. taxes, calling such firms "Benedict Arnold corporations". Criticism of outsourcing, from the perspective of U.S. citizens, by and large revolves around the costs associated with transferring control of the labor process to an external entity in another country. A Zogby International poll conducted in August 2004 found that 71% of American voters believed that "outsourcing jobs overseas" hurt the economy, while 62% believed that the U.S. government should impose some legislative action against companies that transfer domestic jobs overseas, possibly in the form of increased taxes on such companies.[21] One rationale given is the extremely high corporate income tax rate in the U.S. relative to other OECD nations,[22][23][24] together with the unusual practice of taxing revenues earned outside U.S. jurisdiction. It is argued that lowering the corporate income tax and ending the double taxation of foreign-derived revenue (taxed once in the nation where the revenue was raised, and again by the U.S.) would alleviate corporate outsourcing and make the U.S. more attractive to foreign companies. Sarbanes-Oxley has also been cited as a factor in corporate flight from U.S. jurisdiction.
Policy solutions to outsourcing are also criticized.
NETWORK APPLIANCE
Network Appliance
Network Appliance, Inc. (NASDAQ: NTAP), commonly known as NetApp, is a network storage and data management company headquartered in Sunnyvale, California. It is a member of the NASDAQ-100 and ranks on the Fortune 1000.
Network Appliance is credited with the widespread adoption of Network Attached Storage or "NAS" and pioneered Unified Storage. Now Network Appliance storage products support a variety of storage protocols such as iSCSI SAN, Fibre Channel SAN, CIFS and NFS. The key technologies behind most of Network Appliance's product line are the Data ONTAP storage operating system and WAFL file system.
Competition
NetApp competes in the Data Storage Devices industry.[1] NetApp ranks third in market capitalization in its industry, behind EMC and Seagate Technology, and ahead of Western Digital, Brocade, Data Domain, Imation, Quantum, and Isilon.[2] In total revenue, NetApp ranks fourth behind EMC, Seagate, and Western Digital, and ahead of Imation, Brocade, Xyratex, and Hutchinson Technology.[3] Note that these lists of competitors do not include companies with significant storage businesses, such as Hewlett Packard, IBM, Hitachi Data Systems, Dell, and Sun Microsystems.
History
Network Appliance was founded in 1992 by David Hitz, James Lau, and Michael Malcolm.[4][5] At the time, its major competitor was Auspex. It had its initial public offering in 1995. Network Appliance thrived in the internet bubble years of the mid 1990s to 2001, during which the company grew to $1 billion in annual revenue. After the bubble burst, Network Appliance's revenues quickly declined to $800 million in its fiscal year 2002. Since then, the company's revenues have steadily climbed.
Network Appliance also has a long history of making "Best Places to Work" lists. In 2007 the company ranked 6th on Fortune's 100 Best Companies to Work For. This is the fifth consecutive year NetApp has earned a spot on the list, placing in the top 50 each time. NetApp also earned top honors in the "Best Companies to Work for in Research Triangle Park" competition in 2006. Other previous distinctions include making ComputerWorld's "Top 100 Places to Work in IT 2005", "Best Places to Work" in the Greater Bay Area in 2006 by the San Francisco Business Times and the Silicon Valley/San Jose Business Journal, and the 8th spot on the 2006 list of "Best Workplaces in Germany" by Capital Magazine.
Software
The operating system for most of Network Appliance's products is Data ONTAP. The distinguishing feature in Data ONTAP is its WAFL file system, and WAFL's data protection capabilities, including snapshots, file system mirroring, and RAID-DP.
NetCache and its uses
The NetCache software formerly produced by Network Appliance is used in Tunisia to censor Internet access. Technically, censorship in Tunisia uses a transparent proxy that processes every HTTP request sent out and filters out sites based on hostnames. Empirical evidence shows that NetApp hardware was used to implement the controls.[6]
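The mechanism described above is ordinary hostname-based filtering applied by a proxy sitting in the traffic path, the same technique used by corporate and school content filters generally. The sketch below illustrates only that generic filtering decision in Python; the blocklist entries, suffix rules, and function names are illustrative assumptions and do not describe how NetCache itself is configured.

BLOCKED_HOSTS = {"blocked.example"}            # hypothetical exact-match blocklist
BLOCKED_SUFFIXES = (".blocked.example",)       # hypothetical domain-suffix rules

def is_blocked(host):
    """Return True if the request's Host header matches the blocklist."""
    host = host.lower().rstrip(".")
    if host in BLOCKED_HOSTS:
        return True
    return any(host.endswith(suffix) for suffix in BLOCKED_SUFFIXES)

def handle_request(host, path):
    # A transparent proxy needs no client configuration; it inspects each
    # outgoing HTTP request and decides whether to forward or refuse it.
    if is_blocked(host):
        return "HTTP/1.1 403 Forbidden\r\n\r\nBlocked by policy"
    return "FORWARD http://" + host + path     # hand the request to the origin server

print(handle_request("blocked.example", "/"))             # refused
print(handle_request("allowed.example", "/index.html"))   # forwarded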
Major Acquisitions
1997 - Internet Middleware (IMC). IMC's web proxy caching software became the NetCache product line (which was sold off in 2006).
2004 - Spinnaker Networks, Inc. The technology Spinnaker brought to NetApp was integrated into Data ONTAP GX and first released in 2006.
2005 - Alacritus. The technology Alacritus brought to NetApp was integrated into the NetApp NearStore VTL product line.
2005 - Decru. Decru continues to operate as a separate business for data encryption.
2006 - Topio. Software that helps replicate, recover, and protect data over any distance, regardless of the underlying server or storage infrastructure.
Major Divestitures
2006 - NetCache product line sold to Blue Coat Systems, Inc.
Divisions
According to NetApp's management biographies, NetApp is divided into three major businesses:
Networked Storage and Manageability
Data Protection and Retention Solutions
Emerging Products Group, which includes:
Security
Virtual Tape
Heterogeneous Replication
StoreVault
INTRANET
Intranet
An intranet is a private computer network that uses Internet protocols and network connectivity to securely share part of an organization's information or operations with its employees. Sometimes the term refers only to the most visible service, the internal website. The same concepts and technologies as the Internet, such as clients and servers running on the Internet protocol suite, are used to build an intranet. HTTP and other Internet protocols, such as FTP, are commonly used as well. There is often an attempt to use Internet technologies to provide new interfaces to corporate "legacy" data and information systems.
Briefly, an intranet can be understood as "a private version of the Internet," or as a version of the Internet confined to an organization. The term first appeared in print on April 19, 1995, in Digital News & Review in an article authored by technical editor Stephen Lawton.
About Extranets
Intranets differ from "Extranets" in that the former is generally restricted to employees of the organization while extranets can generally be accessed by customers, suppliers, or other approved parties.
There does not necessarily have to be any access from the organization's internal network to the Internet itself. When such access is provided, it is usually through a gateway with a firewall, along with user authentication and encryption of messages, and often makes use of virtual private networks (VPNs). Through such devices and systems, off-site employees can access company information, computing resources, and internal communications.
Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing) or sophisticated corporate directories, sales and CRM tools, project management etc., to advance productivity.
Intranets are also being used as culture-change platforms. For example, large numbers of employees discussing key issues in online forums could lead to new ideas.
Intranet traffic, like public-facing web site traffic, is better understood by using web metrics software to track overall activity, as well as through surveys of users.
Intranet "User Experience", "Editorial", and "Technology" teams work together to produce in-house sites. Most commonly, intranets are owned by the communications, HR or CIO areas of large organizations, or some combination of the three.
Advantages Of Intranet
Workforce productivity: Intranets can help users locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface such as Internet Explorer or Firefox, users can access data held in any database the organization wants to make available, at any time and, subject to security provisions, from any workstation within the company, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.
Time: With intranets, organizations can make more information available to employees on a "pull" basis (i.e., employees can link to relevant information at a time that suits them) rather than being deluged indiscriminately by e-mails.
Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organisation.
Web publishing allows 'cumbersome' corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, newsfeeds, and even training materials, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is always available to employees using the intranet.
Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
Cost-effective: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists, and requisition forms.
Promote common corporate culture: Every user is viewing the same information within the Intranet.
Enhance Collaboration: With information easily accessible by all authorised users, teamwork is enabled.
Cross-platform Capability: Standards-compliant web browsers are available for Windows, Mac, and *NIX.
Disadvantages Of Intranet
Inappropriate or incorrect information can be posted on an intranet which can reduce its credibility and effectiveness.
In a devolved and highly interactive intranet there is freedom to post abusive and possibly illegal materials. There is a balance to be struck between taking advantage of this freedom to achieve corporate goals and having appropriate controls in place to meet an organization's legal or moral responsibilities.
Expertise is needed within the organization to administer and develop the intranet and its content.
Security of the intranet can become an issue. Users may post sensitive information that then becomes visible to other users. Furthermore, in an industry with high turnover there is the potential for an employee to acquire sensitive information which may significantly benefit their new position at a competing company.
As information can be posted by any user, information overload may occur if posting is not well controlled.
Planning and creating an intranet
Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Planning would include topics such as:
What they hope to achieve from the intranet
Which person or department would "own" (take control of) the technology and the implementation
How and when existing systems would be phased out/replaced
How they intend to make the intranet secure
How they will keep it within legislative and other constraints
The level of interactivity (e.g., wikis, online forms) desired.
Whether the input of new data and the updating of existing data is to be centrally controlled or devolved.
These are in addition to the hardware and software decisions (like Content Management Systems), participation issues (like good taste, harassment, confidentiality), and features to be supported [3].
The actual implementation would include steps such as the following (a minimal web-server sketch follows the list):
User involvement to identify users' information needs.
Setting up a web server with the correct hardware and software.
Setting up web server access using a TCP/IP network.
Installing the user programs on all required computers.
Creating a homepage for the content to be hosted.[4]
User involvement in testing and promoting use of intranet.
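As a concrete illustration of the web-server steps above, the following minimal sketch serves a single intranet homepage over HTTP using Python's standard library. The bind address, port, and page content are assumptions made for illustration; a real intranet server would sit behind the organization's firewall and user authentication.

from http.server import HTTPServer, BaseHTTPRequestHandler

HOMEPAGE = b"<html><body><h1>Company Intranet</h1><p>Welcome.</p></body></html>"

class IntranetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the same homepage for every path in this sketch.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(HOMEPAGE)))
        self.end_headers()
        self.wfile.write(HOMEPAGE)

if __name__ == "__main__":
    # Bind to an address reachable only on the internal TCP/IP network.
    server = HTTPServer(("0.0.0.0", 8080), IntranetHandler)
    print("Intranet homepage available on port 8080")
    server.serve_forever()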
GRID COMPUTING
Grid computing
Grid computing is a term in distributed computing that can have several meanings:
· A local computer cluster which is like a "grid" because it is composed of multiple nodes.
· Offering online computation or storage as a metered commercial service, known as utility computing, computing on demand, or cloud computing.
· The creation of a "virtual supercomputer" by using spare computing resources within an organization.
· The creation of a "virtual supercomputer" by using a network of geographically dispersed computers. Volunteer computing, which generally focuses on scientific, mathematical, and academic problems, is the most common application of this technology.
These varying definitions cover the spectrum of "distributed computing", and sometimes the two terms are used as synonyms. This article focuses on distributed computing technologies that are not in the form of traditional dedicated clusters; for those, see computer cluster.
Functionally, one can also speak of several types of grids:
· Computational grids (including CPU-scavenging grids), which focus primarily on computationally intensive operations.
· Data grids, or the controlled sharing and management of large amounts of distributed data.
· Equipment grids, which are built around a primary piece of equipment, e.g. a telescope, and where the surrounding grid is used to control the equipment remotely and to analyze the data it produces.
Grids versus conventional supercomputers
"Distributed" or "grid computing" in general is a special type of parallel computing which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many CPUs connected by a local high-speed computer bus.
The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a many-CPU supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various CPUs and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications where multiple parallel computations can take place independently, without the need to communicate intermediate results between CPUs.
The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet. Conventional supercomputers also create physical challenges in supplying sufficient electricity and cooling capacity in a single location. Both supercomputers and grids can be used to run multiple parallel computations at the same time, which might be different simulations for the same project, or computations for completely different applications. The infrastructure and programming considerations needed to do this on each type of platform are different, however.
There are also differences in programming and deployment. It can be costly and difficult to write programs so that they can be run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can cause conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug programs on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
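A minimal sketch of that idea follows, assuming an embarrassingly parallel problem (counting primes in a range). The standalone worker function needs no coordination, so a thin layer can simply hand each node a different slice of the input; here a local process pool stands in for remote grid nodes, and the slice sizes and process count are arbitrary illustrative choices.

from multiprocessing import Pool

def count_primes(bounds):
    """Standalone work unit: count primes in [lo, hi). Needs no coordination."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one large problem into independent slices (the work units).
    slices = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with Pool(processes=4) as pool:          # stand-in for four grid nodes
        partial_counts = pool.map(count_primes, slices)
    print("primes below 100000:", sum(partial_counts))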
Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
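A toy sketch of that redundancy check follows. The node names, the simulated faulty node, and the work units are invented for illustration; a real scheduler would receive results over the network from volunteer machines it does not control.

import random

def simulate_node(node_id, work_unit):
    """Pretend to compute; 'bad-node' sometimes returns a wrong answer."""
    correct = sum(work_unit)                          # the "true" computation
    if node_id == "bad-node" and random.random() < 0.5:
        return correct + 1                            # a faulty or malicious result
    return correct

nodes = ["node-a", "node-b", "node-c", "bad-node"]
work_units = {i: list(range(i, i + 10)) for i in range(5)}

for unit_id, data in work_units.items():
    first, second = random.sample(nodes, 2)           # two different nodes, ideally with different owners
    r1, r2 = simulate_node(first, data), simulate_node(second, data)
    if r1 == r2:
        print("unit", unit_id, ": accepted result", r1)
    else:
        print("unit", unit_id, ": mismatch, reassigning (",
              first, "=", r1, ",", second, "=", r2, ")")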
Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.
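The short sketch below illustrates deadline-based reassignment under the same caveat: the deadline, node names, and work-unit identifiers are assumptions, and a real system would receive completion reports asynchronously rather than simply sleeping.

import time

DEADLINE_SECONDS = 2.0
spare_nodes = ["node-c", "node-d"]          # nodes currently idle
outstanding = {}                            # unit_id -> (assigned node, time issued)

def issue(unit_id, node):
    outstanding[unit_id] = (node, time.time())
    print("issued", unit_id, "to", node)

issue("unit-1", "node-a")
issue("unit-2", "node-b")

time.sleep(2.5)                             # simulate waiting; neither node reports back

now = time.time()
for unit_id, (node, issued_at) in list(outstanding.items()):
    if now - issued_at > DEADLINE_SECONDS and spare_nodes:
        # The unit is overdue, so hand it to another node. Large work units keep
        # this overhead small relative to the computation itself.
        replacement = spare_nodes.pop(0)
        print(unit_id, "overdue on", node, "- reassigning to", replacement)
        outstanding[unit_id] = (replacement, now)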
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.
In many cases, the participating nodes must trust the central system not to abuse the access that is being granted, by interfering with the operation of other programs, mangling stored information, transmitting private data, or creating new security holes. Other systems employ measures to reduce the amount of trust "client" nodes must place in the central system such as placing applications in virtual machines.
Public systems or those crossing administrative domains (including different departments in the same organization) often result in the need to run on heterogeneous systems, using different operating systems and hardware architectures. With many languages, there is a tradeoff between investment in software development and the number of platforms that can be supported (and thus the size of the resulting network). Cross-platform languages can reduce the need to make this tradeoff, though potentially at the expense of high performance on any given node (due to run-time interpretation or lack of optimization for the particular platform).
Various middleware projects have created generic infrastructure to allow various scientific and commercial projects to harness a particular associated grid, or for the purpose of setting up new grids. BOINC is a common one for academic projects seeking public volunteers; more are listed at the end of the article.
CPU scavenging
CPU-scavenging, cycle-scavenging, cycle stealing, or shared computing creates a "grid" from the unused resources in a network of participants (whether worldwide or internal to an organization). Usually this technique is used to make use of instruction cycles on desktop computers that would otherwise be wasted at night, during lunch, or even in the scattered seconds throughout the day when the computer is waiting for user input or slow devices.
Volunteer computing projects use the CPU scavenging model almost exclusively.
In practice, participating computers also donate some supporting amount of disk storage space, RAM, and network bandwidth, in addition to raw CPU power. Nodes in this model are also more vulnerable to going "offline" in one way or another from time to time, as their owners use their resources for their primary purpose.
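A minimal sketch of the scavenging loop itself is shown below. The idle test uses the one-minute load average via os.getloadavg (available on Unix-like systems only), and the threshold and chunk size are arbitrary assumptions; real volunteer-computing clients use richer signals such as keyboard and mouse activity or screensaver state.

import os
import time

def machine_is_idle(threshold=0.5):
    try:
        one_minute_load, _, _ = os.getloadavg()
        return one_minute_load < threshold
    except (AttributeError, OSError):
        return True                     # assume idle where load average is unavailable

def do_small_chunk(state):
    """One slice of the long-running computation (here, a running sum)."""
    state["total"] += sum(range(10_000))
    state["chunks"] += 1

state = {"total": 0, "chunks": 0}
for _ in range(5):                      # a real client would loop indefinitely
    if machine_is_idle():
        do_small_chunk(state)
    else:
        time.sleep(1)                   # yield to the machine's primary user
print("completed", state["chunks"], "chunks")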
History
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid, in Ian Foster and Carl Kesselman's seminal work, "The Grid: Blueprint for a New Computing Infrastructure".
CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later in 1999 by SETI@home to harness the power of networked PCs worldwide, in order to solve CPU-intensive research problems.
The ideas of the grid (including those from distributed computing, object-oriented programming, cluster computing, web services, and others) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the "fathers of the grid".[1] They led the effort to create the Globus Toolkit, incorporating not just computation management but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of the services needed to create an enterprise or global grid.
Fastest virtual supercomputers
Current projects and applications
Grids offer a way to solve Grand Challenge problems like protein folding, financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using the information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility for commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.
Grid computing is presently being applied successfully by the National Science Foundation's National Technology Grid, NASA's Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb, Co., and American Express.[citation needed]
One of the most famous cycle-scavenging networks is SETI@home, which was using more than 3 million computers to achieve 23.37 sustained teraflops (979 lifetime teraflops) as of September 2001 [3].
As of May 2005, Folding@home had achieved peaks of 186 teraflops on over 160,000 machines.
Another well-known project is distributed.net, which was started in 1997 and has run a number of successful projects in its history.
The NASA Advanced Supercomputing facility (NAS) has run genetic algorithms using the Condor cycle scavenger running on about 350 Sun and SGI workstations.
Until April 27, 2007, United Devices operated the United Devices Cancer Research Project based on its Grid MP product, which cycle scavenges on volunteer PCs connected to the Internet. As of June 2005, the Grid MP ran on about 3,100,000 machines [4].
The Enabling Grids for E-sciencE (EGEE) project, which is based in the European Union and includes sites in Asia and the United States, is a follow-up project to the European DataGrid (EDG) and is arguably the largest computing grid on the planet. It, along with the LHC Computing Grid (LCG),[4] has been developed to support the experiments using the CERN Large Hadron Collider. The LCG project is driven by CERN's need to handle huge amounts of data, where storage rates of several gigabytes per second (10 petabytes per year) are required. A list of active sites participating within LCG can be found online,[5] as can real-time monitoring of the EGEE infrastructure.[6] The relevant software and documentation are also publicly accessible.[7]
Definitions
Today there are many definitions of Grid computing:
· The definitive definition of a grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist".[8] The three points of this checklist are:
· Computing resources are not administered centrally.
· Open standards are used.
· Non-trivial quality of service is achieved.
· Plaszczak/Wellner[9] define grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
· IBM defines grid computing as "the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across 'multiple' administrative domains based on their (resources) availability, capacity, performance, cost and users' quality-of-service requirements" [10]
· An earlier expression of the notion of computing as a utility came in 1965 from MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating "like a power company or water company". http://www.multicians.org/fjcc3.html
· Buyya defines a grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".[11]
· CERN, one of the largest users of grid technology, talks of the Grid: "a service for sharing computer power and data storage capacity over the Internet."[12]
· Pragmatically, grid computing is attractive to geographically-distributed non-profit collaborative research efforts like the NCSA Bioinformatics Grids such as BIRN: external grids.
· Grid computing is also attractive to large commercial enterprises with complex computation problems who aim to fully exploit their internal computing power: internal grids.
· A survey by Heinz Stockinger (conducted in spring 2006; to be published in the Journal of Supercomputing in early 2007) presents a snapshot of views of the field in 2006.
· An earlier survey by Miguel L. Bote-Lorenzo et al. (conducted in autumn 2002; published in the LNCS series of Springer-Verlag) presents a snapshot of the field in 2002.
Grids can be categorized with a three-stage model of departmental grids, enterprise grids, and global grids. These correspond to a firm initially utilizing resources within a single group, i.e. an engineering department connecting desktop machines, clusters, and equipment. This progresses to enterprise grids, where non-technical staff's computing resources can be used for cycle-stealing and storage. A global grid is a connection of enterprise and departmental grids that can be used in a commercial or collaborative manner.
SECURITY
Security
An extranet requires security and privacy. These can include firewalls, server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network.
Many technical specifications describe methods of implementing extranets, but rarely define an extranet explicitly. RFC 3547 presents requirements for remote access to extranets. RFC 2709 discusses extranet implementation using IPsec and advanced network address translation (NAT).
Industry uses
During the late 1990s and early 2000s, several industries started to use the term "extranet" to describe central repositories of shared data made accessible via the web only to authorized members of particular work groups.
For example, in the construction industry, project teams could log in to and access a 'project extranet' to share drawings and documents, make comments, issue requests for information, and so on. In 2003 in the United Kingdom, several of the leading vendors formed the Network of Construction Collaboration Technology Providers, or NCCTP, to promote the technologies and to establish data exchange standards between the different systems. The same type of construction-focused technologies have also been developed in the United States, Australia, Scandinavia, Germany, and Belgium, among others. Some applications are offered on a Software as a Service (SaaS) basis by vendors functioning as application service providers (ASPs).
Specially secured extranets are used to provide virtual data room services to companies in several sectors (including law and accountancy).
There are a variety of commercial extranet applications, some of which are for pure file management and others of which include broader collaboration and project management tools. A variety of open-source extranet applications and modules also exist, and these can be integrated into other online collaborative applications such as content management systems.
Disadvantages
Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, and employee training costs) if hosted internally rather than through an ASP.
Security of extranets can be a big concern when dealing with valuable information. System access needs to be carefully controlled to avoid sensitive information falling into the wrong hands.
Extranets can reduce personal contact (face-to-face meetings) with customers and business partners. This can weaken the relationships between people and the company, which hurts the business when it comes to the loyalty of its business partners and customers.