Application Virtualization
Application Virtualization is the technology that lets an application execute directly on an operating system without any installation, inside its own private environment, without interfering with base OS settings (the registry, etc.). So, in non-technical terms, application virtualization means getting or running an application whenever and wherever you want, without any installation.
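A minimal sketch of the isolation idea, in Python, with purely illustrative path and function names: the virtualized application's writes are redirected into a private per-application directory instead of touching the real system locations.

import os

SANDBOX = os.path.expanduser("~/.appvirt/sandbox")   # hypothetical private environment

def redirect(path):
    # Map a system path the application asks for to its private copy in the sandbox,
    # so the real OS settings and files are never modified.
    private = os.path.join(SANDBOX, path.lstrip("/\\"))
    os.makedirs(os.path.dirname(private), exist_ok=True)
    return private

# The application believes it is writing to a shared settings file,
# but the write lands in its isolated environment.
with open(redirect("/etc/myapp/settings.ini"), "w") as f:
    f.write("theme=dark\n")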
Application virtualization technology has been available for a very long time, but its usage and awareness have been limited. In earlier operating systems, when one of your applications crashed, the complete operating system could crash with it, including all the other applications, because the applications and the operating system were tightly interlinked. Application virtualization is one solution to this problem: the application runs in a completely isolated environment, and other applications are unaware of it.
Generally, in an enterprise environment, the full application must be pushed to the client machine and installed. Depending on the size of the application, this requires a lot of bandwidth, disk space, and other resources on the network and the client machines. With the streaming option of application virtualization, you can configure the client computer to download only the part of the application needed for execution, and only on request, saving a lot of resources.
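A minimal sketch of the streaming idea, with hypothetical names throughout: the client fetches only the blocks of the packaged application it actually touches, instead of downloading and installing the whole package up front.

class StreamedApplication:
    BLOCK_SIZE = 64 * 1024                       # 64 KiB blocks, illustrative value

    def __init__(self, fetch_block):
        self.fetch_block = fetch_block           # callable: block index -> bytes
        self.cache = {}                          # blocks already streamed to the client

    def read(self, offset, length):
        # Return bytes of the virtual package, streaming any missing blocks on request.
        data = bytearray()
        first = offset // self.BLOCK_SIZE
        last = (offset + length - 1) // self.BLOCK_SIZE
        for block in range(first, last + 1):
            if block not in self.cache:          # fetched only on first access
                self.cache[block] = self.fetch_block(block)
            data += self.cache[block]
        start = offset % self.BLOCK_SIZE
        return bytes(data[start:start + length])

# Usage (download_block is a hypothetical transport function):
# app = StreamedApplication(lambda i: download_block("myapp.pkg", i))
# header = app.read(0, 512)   # only block 0 is actually downloaded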
Current problems, such as the need for applications on demand, easy management of applications, easy deployment, and application compatibility issues, led to the development of application virtualization technology.
Application Virtualization Advantages
• No installation required: Installing an application on many compute...
...you always have the necessary resources and bandwidth available.
Application virtualization is a concept built entirely on networks and resource sharing, which is what this subject is all about. Application virtualization technology deals with how an application can be virtualized at the client end from the server side using network resources.
Nowadays, many companies are coming forward with the application virtualization concept for IT environments. Some big players in virtualization are VMware, Microsoft, Citrix, etc. VMware already launched VMware ThinApp version 5.0 last October, whereas Microsoft has also included the new App-V role in its Server 2012. Microsoft's client operating systems also provide compatibility troubleshooting based on the application virtualization concept. Soon this concept will be widely used and will replace traditional working environments.
Application Virtualization: Application virtualization delivers an application hosted on a single machine to a large number of users. The application can be deployed in the cloud on high-grade virtual machines, but because a large number of users access it, its costs are shared by those users.
Virtualization is a technology that creates an abstract version of a complete operating environment including a processor, memory, storage, network links, and a display entirely in software. Because the resulting runtime environment is completely software based, the software produces what’s called a virtual computer or a virtual machine (M.O., 2012). To simplify, virtualization is the process of running multiple virtual machines on a single physical machine. The virtual machines share the resources of one physical computer, and each virtual machine is its own environment.
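A minimal sketch of that resource sharing, with illustrative class and field names: several virtual machines are carved out of a single physical host's CPUs and memory, and each one is its own environment.

class Host:
    def __init__(self, cpus, mem_gb):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.vms = {}

    def create_vm(self, name, cpus, mem_gb):
        # Each virtual machine takes a slice of the one physical computer's resources.
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("not enough physical resources left on this host")
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb}   # isolated virtual environment

host = Host(cpus=16, mem_gb=64)
host.create_vm("web", cpus=4, mem_gb=8)
host.create_vm("db", cpus=8, mem_gb=32)
print(host.free_cpus, host.free_mem)   # 4 CPUs and 24 GB left for further VMs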
Virtual machines operate based on the computer architecture and functions of a real or hypothetical computer, and their implementations may involve specialized hardware, software, or a combination of both.
At its core, cloud computing is a specialized form of grid computing and distributed computing that varies in terms of infrastructure, deployment, service, and geographic dispersion (Veeramachanenin, September 2015). The cloud enhances scalability, collaboration, availability, and the ability to adapt to fluctuations in demand, accelerates development work, and provides options for cost reduction through efficient and optimized computing (BH Kawljeet, June 2015). Cloud computing (CC) has recently emerged as a new paradigm for the delivery and hosting of services over the internet. There are mainly three service delivery models. In Software as a Service (SaaS), the required software, operating system, and network are provided; in other words, the customer accesses the hosted software instead of installing it on a local computer, typically through the local computer's internet browser (e.g. web-enabled e-mail). The user only pays for the service, and the cloud service provider is responsible for the management and control of the mobile cloud infrastructure; some of the companies that provide such services are Google, Microsoft, Salesforce, Facebook, etc. In Infrastructure as a Service (IaaS), the cloud provider only provides some hardware resources such as network and virtualization...
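An illustrative summary of the division of responsibility just described, kept to the two delivery models the paragraph spells out; the exact split shown here is an assumption, not taken from the cited sources.

DELIVERY_MODELS = {
    "SaaS": {
        "provider_manages": ["application", "operating system", "network", "infrastructure"],
        "customer_does": ["access the hosted software through a browser and pay for use"],
        "example_providers": ["Google", "Microsoft", "Salesforce", "Facebook"],
    },
    "IaaS": {
        "provider_manages": ["hardware resources such as network and virtualization"],
        "customer_does": ["run and manage their own software on top of the rented resources"],
        "example_providers": [],   # left empty; the paragraph above names none for IaaS
    },
}

for model, detail in DELIVERY_MODELS.items():
    print(model, "-> provider manages:", ", ".join(detail["provider_manages"]))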
This approach is significantly better than the traditional way, in which a company buys rackmount servers and networking equipment, because virtual hardware can be repurposed and reused, unlike physical hardware, where new servers have to be bought whenever the application needs to scale to handle a higher load. This approach can also be used to have multiple low-end machines work together to provide the processing power and storage of a high-end computer at a much lower price. Networking, in particular, costs much less because everything is virtual, there is no physical networking hardware, and OpenStack creates a mesh-like network.
For that reason, hardware virtualisation is beneficial for handling all servers together and delivering data from the data centre's servers to the user's virtual desktop.
Virtualization technologies provide isolation of operating systems from hardware. This separation enables hardware resource sharing. With virtualization, a system pretends to be two or more of the same system [23]. Most modern operating systems contain a simplified system of virtualization. Each running process is able to act as if it is the only thing running. The CPUs and memory are virtualized. If a process tries to consume all of the CPU, a modern operating system will pre-empt it and allow others their fair share. Similarly, a running process typically has its own virtual address space that the operating system maps to physical memory to give the process the illusion that it is the only user of RAM.
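A minimal sketch of that per-process illusion (POSIX-only, run under CPython): after fork(), parent and child each have their own virtual address space, so the child's write to the counter never reaches the parent.

import os

counter = 0

pid = os.fork()
if pid == 0:                       # child process
    counter = 100                  # modifies only the child's copy of the memory
    print(f"child : counter = {counter}")
    os._exit(0)
else:                              # parent process
    os.waitpid(pid, 0)             # wait for the child to finish
    print(f"parent: counter = {counter}")   # still 0: the address spaces are separate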
The fundamental idea behind a virtual machine is to abstract the hardware of a single computer into a self-contained operating environment that behaves as if it were a separate computer. Essentially, the virtual machine is software that executes an application and isolates it from the actual operating system and hardware. CPU scheduling and virtual-memory techniques are used so that an operating system can create the illusion that a process has its own processor with its own (virtual) memory. The virtual machine provides the ability to share the same hardware yet run several different operating systems concurrently, as shown in Figure 2-11.
In a large system, the operating system plays an even more important role, helping different types of programs and users to run on the system at the same time without interfering with each other.
Virtual memory is an old concept. Before computers utilized cache, they used virtual memory. Initially, virtual memory was introduced not only to extend primary memory, but also to make such an extension as easy as possible for programmers to use. Memory management is a complex interrelationship between processor hardware and operating system software. For virtual memory to work, a system needs to employ some sort of paging or segmentation scheme, or a combination of the two. Nearly all implementations of virtual memory divide a virtual address space into pages, which are blocks of contiguous virtual memory addresses. On the other hand, some systems use segmentation instead of paging. Segmentation divides virtual address spaces into variable-length segments. Segmentation and paging can be used together by dividing each segment into pages.
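A minimal sketch of the paging scheme just described, using illustrative values (4 KiB pages) and a plain dictionary standing in for the page table: a virtual address is split into a page number and an offset, and the page number is looked up to find the physical frame.

PAGE_SIZE = 4096                               # 4 KiB pages, illustrative
OFFSET_BITS = PAGE_SIZE.bit_length() - 1       # 12 offset bits for 4 KiB pages

def translate(virtual_address, page_table):
    # Map a virtual address to a physical address through a simple page table.
    page_number = virtual_address >> OFFSET_BITS
    offset = virtual_address & (PAGE_SIZE - 1)
    frame_number = page_table[page_number]     # a missing entry would be a page fault
    return (frame_number << OFFSET_BITS) | offset

page_table = {2: 7}                            # virtual page 2 lives in physical frame 7
print(hex(translate(0x2ABC, page_table)))      # -> 0x7abc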
Virtual memory is a memory management technique implemented with the help of both hardware and software. It maps memory addresses used by a program, called virtual addre...
Each virtual network in a network virtualization environment is a collection of virtual nodes and virtual links. Essentially, a virtual network is a subset of the underlying physical network resources. Network virtualization proposes decoupling of functionalities in a networking environment by separating the role of traditional ISPs into InPs (Infrastructure Providers), who manage the physical infrastructure, and SPs (Service Providers), who create virtual networks by aggregating resources from multiple InPs and offer end-to-end network services. In order to build virtual-network-enabled networks, the following requirements must be fulfilled:
• Robustness: The network should continue to operate in the event of node or link failure.
• Manageability: The InP must have a view of the underlying physical topology, state, and other parameters associated with the equipment providing the virtual network.
• Traffic and network resource control: Traffic engineering and management techniques performed by the InP must not restrict the basic operation of a virtual network.
• Isolation: Mechanisms for isolation between virtual networks must be provided. It must be guaranteed that malfunctioning virtual networks do not affect the performance of other virtual networks sharing the same resources.
• Scalability: Any technical solution must scale to cope with any number of virtual networks.
• On-demand provisioning: It must be possible to create, modify, and remove virtual networks dynamically on request. In general, a virtual network will have a limited lifespan.
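A minimal sketch of that embedding, with all class and field names purely illustrative: a virtual network is a set of virtual nodes and links that the SP maps onto the InP's physical substrate.

from dataclasses import dataclass, field

@dataclass
class VirtualLink:
    endpoints: tuple            # pair of virtual node names
    bandwidth_mbps: int         # capacity reserved along the substrate path

@dataclass
class VirtualNetwork:
    name: str
    nodes: dict = field(default_factory=dict)    # virtual node -> physical host
    links: list = field(default_factory=list)    # list of VirtualLink

    def map_node(self, vnode, physical_host):
        self.nodes[vnode] = physical_host        # embed the virtual node onto the substrate

vn = VirtualNetwork("tenant-A")
vn.map_node("vr1", "substrate-node-3")
vn.map_node("vr2", "substrate-node-7")
vn.links.append(VirtualLink(("vr1", "vr2"), bandwidth_mbps=100))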
Cloud computing is a type of computing that depends on sharing computing resources rather than having local servers or personal devices handle applications.
One of the main tools being widely implemented worldwide is Value Management. Recently, implementation of VM has become a must as per many internationally followed standards. It has become a part of the project and is a continuous process right from initiation to handover. It is implemented even in the operation and maintenance phase.
Virtualization, which is when one computer hosts the appearance of many computers, will be necessary for my future. Virtualization will be needed to access the company's database to confirm clients' flight reservations, purchase orders, etc. We would also need it to have flight simulators for new parts at the manufacturing company. For example, we would need a flight simulator that gives us all the information a real computer would and that responds to our commands, so we can test the plane without having to put the actual expensive software and hardware in the plane. The second concept that can be applied to my dream job is the database management system. I would need this concept to be able to have an extensive amount of data that is accurate, to control redundancy, and to have a safe backup/recovery process. As an executive officer, I need to be able to rely on the database management system to give me accurate information on parts that have caused flights to make emergency landings, for example. This could include all the materials used in the production of the product or vendor information. Finally, the WAN concept is essential for my dream job because I need to be able to have computers that connect two or more different sites. For a manufacturing company, there would be multiple sites where items