High performance computing (HPC) is the use of parallel processing to run advanced application programs efficiently, reliably and quickly.

The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies (Dell, 2015), especially the military, also rely on HPC for complex applications. High-performance systems often use custom-made components in addition to so-called commodity components.


HPC solutions help you focus on your work, while getting the computational power you need through:

  • Expert guidance and community-based collaboration to help you reach breakthroughs faster
  • Simplified integration of x86 platforms and open standards technologies that make it easier to select, deploy and manage clusters
  • Tested and validated solutions that are integrated to deliver optimal performance, reliability and efficiency.


Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.

For energy savings and convenience, consider the following guidelines:

  • Turn off the monitor if you aren’t going to use your PC for more than 20 minutes.
  • Turn off both the CPU and monitor if you’re not going to use your PC for more than 2 hours.






Linux is one of the most popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind, and its feature list is quite similar to that of UNIX (Linuxtopia.org). The basic design of Linux is built around the kernel; the first Linux kernel was released in 1991, and it has since been ported to many computer architectures. All Linux code can be modified free of charge and redistributed, commercially or non-commercially, under the GNU licence.


Components of the Linux System:


KERNEL – The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules and it interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.


SYSTEM LIBRARY – System libraries are special functions or programs through which application programs or system utilities access the kernel’s features. These libraries implement most of the functionality of the operating system and do not require kernel-module code access rights.


SYSTEM UTILITY – System utility programs are responsible for doing specialized, individual-level tasks.



Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require any context switch, and hence it is very efficient and fast.


User programs and other system programs work in user mode, which has no access to system hardware or kernel code. User programs and utilities use system libraries to access kernel functions and obtain the system’s low-level services.
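As a minimal sketch of this split (assuming a Linux system with a C compiler), the short program below runs entirely in user mode and never touches the hardware itself: it asks the system library (glibc) for the write() and getpid() services, and the library traps into the kernel, which does the privileged work in kernel mode.

    /* user_mode_demo.c - a user-mode program requesting kernel services
     * through the system library. Build with: gcc user_mode_demo.c */
    #include <unistd.h>   /* write(), getpid(): library wrappers around system calls */
    #include <string.h>

    int main(void)
    {
        const char msg[] = "Hello from user mode\n";

        /* The library call below traps into the kernel, which runs in
         * privileged kernel mode, writes the bytes to the terminal device,
         * and returns the byte count to the user-mode program. */
        ssize_t written = write(STDOUT_FILENO, msg, strlen(msg));

        /* getpid() is another kernel service exposed through the library. */
        pid_t pid = getpid();

        return (written < 0 || pid <= 0) ? 1 : 0;
    }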



PORTABLE – Portability means that software can work in the same way on different kinds of hardware. The Linux kernel and application programs support installation on any kind of hardware platform.

OPEN SOURCE – Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux system, and it is continuously evolving.


MULTI-USER – Linux is a multi-user system, meaning multiple users can access system resources such as memory/RAM and application programs at the same time.


MULTIPROGRAMMING – Linux is a multiprogramming system, meaning multiple applications can run at the same time.


HIERARCHICAL FILE SYSTEM – Linux provides a standard file structure in which system files and user files are arranged.

SHELL – Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various kinds of operations, call application programs, and so on.

SECURITY – Linux provides user security using authentication features such as password protection, controlled access to specific files and encryption of data.



The Linux system architecture consists of the following layers:

HARDWARE LAYER – Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

 KERNEL – Core component of Operating System, interacts directly with hardware, provides low level services to upper layer components.

 SHELL – An interface to kernel, hiding complexity of kernel’s functions from users. Takes commands from user and executes kernel’s functions.

UTILITIES – Utility programs give the user most of the functionality of an operating system.
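To make the shell layer concrete, here is a hedged toy sketch (assuming a Linux/UNIX environment) of what a shell does underneath: it reads one command from the user, asks the kernel to create a new process with fork(), loads the requested utility with exec, and waits for it to finish.

    /* mini_shell.c - a toy illustration of the shell layer. It accepts one
     * command name (e.g. "ls") and runs it via the kernel services
     * fork(), exec() and wait(). Build with: gcc mini_shell.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char line[256];

        printf("mini-shell> ");
        if (fgets(line, sizeof line, stdin) == NULL)
            return 0;
        line[strcspn(line, "\n")] = '\0';      /* strip the trailing newline */

        pid_t child = fork();                  /* kernel creates a new process */
        if (child == 0) {
            execlp(line, line, (char *)NULL);  /* kernel loads the utility     */
            perror("exec failed");             /* reached only if exec fails   */
            _exit(127);
        }

        int status;
        waitpid(child, &status, 0);            /* the shell waits for the utility */
        printf("command exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }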


Linuxtopia.org. “Linux Security For Beginners – Security And Linux”. N.p., 2016. Web. 28 Feb. 2016.

Ibm.com. “Anatomy Of The Linux Kernel”. N.p., 2016. Web. 28 Feb. 2016.



Linux operating system

The architecture of the Linux operating system


The two software components of the Linux operating system are:

  • Kernel
  • Shell

The kernel is the core, centre or essence of the operating system; it provides the basic services for all other parts of the operating system. The kernel controls the resources of the computer, allocating them to different users and tasks. It enables users to write programs easily and makes those programs able to run on multiple hardware platforms, because the kernel acts directly upon the hardware.

A shell is a user interface through which the Linux user interacts with the Linux system. The shell provides the services needed by the Linux user, and it is also used as a command-line interface (CLI) to run commands and programs in Linux.

The utilities and commands of Linux are the collection of programs that perform processes in a Linux system; database management systems and word processors are available for Linux. Linus Torvalds developed the Linux system at the University of Helsinki in Finland in 1991. Linux is one of the most popular operating systems because of its free, open-source distribution and its compatibility with different hardware platforms such as AMD and Intel.

In order to interact with the Linux system, one copy of the shell is provided to each Linux user. The shell and the kernel interact with each other through Linux system calls.






Virtual Memory



Virtual memory is a memory-management technique that compensates for the limited size of main memory: it gives the illusion of more memory than the RAM actually present in the system. It also separates logical memory from physical memory.

  • Logical memory is the memory as the process (the program) views it.
  • Physical memory is the memory as the processor (the hardware) views it.

Before virtual memory was invented, overlaying was used to manage programs larger than main memory. Virtual memory automates this memory-management process, which is a huge relief for programmers, and it also provides protection and relocation.

Protection means that all programs are isolated from each other, which is a benefit that allows each of them to work in its own address space. Implementing this protection is easy.

Relocation also lets each program have its own address space, so the details of run-time placement do not affect the generation of code.

The principles of virtual memory are quite similar to those of cache memory, but the details differ because the objectives differ. The key is the idea of locality, which exploits both spatial and temporal locality. The lower level of the memory hierarchy is the disk, which is several orders of magnitude slower, and that difference shapes how the principles are implemented.


The concepts of virtual memory

It implements a mapping function between the virtual address space and the physical address space.

For instance, compare the Pentium and the PowerPC:

The Pentium: both the virtual and the physical address spaces use 32-bit addresses, and segmentation is used.

The PowerPC: the physical address space is 32 bits, while the virtual address space is 48 bits.


Physical memory is divided into fixed-size portions called physical pages, so a physical address has two parts:

  • Physical page number
  • Byte offset within the pages

A virtual address is likewise divided into two fixed parts, relating to the virtual pages. They are:

  • Virtual page number
  • Byte offset within the virtual page

(By: Saad)
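A small worked example of this split (a hedged sketch assuming a hypothetical 4 KB page size): the page number is the address divided by the page size and the byte offset is the remainder, and the same arithmetic applies to both virtual and physical addresses.

    /* page_split.c - splitting an address into page number and byte offset.
     * The 4 KB (4096-byte) page size is an assumption for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u    /* assumed page size: 2^12 bytes */

    int main(void)
    {
        uint32_t virtual_address = 20491;                      /* example address */
        uint32_t page_number = virtual_address / PAGE_SIZE;    /* = 5  */
        uint32_t byte_offset = virtual_address % PAGE_SIZE;    /* = 11 */

        printf("virtual address %u -> virtual page %u, byte offset %u\n",
               virtual_address, page_number, byte_offset);
        return 0;
    }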













Virtual Memory


It is a feature of the operating system which makes up for a shortage of primary memory by swapping some files or pages from random access memory (RAM) out to disk-backed virtual memory.


Working of Virtual Memory

The basic idea is that a computer system has two separate address spaces. Virtual memory is what the program uses and sees; for example, ld R4, 1024(R0) accesses virtual address R0 + 1024 = 1024, and in MIPS the virtual address space is 32 bits. The other is the physical memory, which is what the hardware uses. Physical memory is the RAM installed in the computer system; it comes with the computer hardware and can be removed or upgraded, with typical sizes ranging from 2 GB up to whatever capacity the system supports. With a 31-bit physical address, physical memory spans addresses 0 to 2^31 - 1, which is about two billion bytes. The physical address space is determined by how much RAM is installed: with little RAM the physical address space is small, and with more RAM it is larger.

How a Program Accesses Memory

(Google images)

When a program executes a load, it specifies a virtual address. The load is executed in the program’s virtual address space; for example, ld R4, 1024(R0) produces virtual address 1024, and the computer must translate that virtual address into a physical address. Suppose the translator indicates that virtual address 1024 matches physical address 2 in the RAM: the computer looks up the data for R4 in RAM at that location and returns it. If the data is not in memory, the translator loads it from the disk and then returns the value. If another access uses virtual address 300 and the translator shows that it lives in memory location 12, the data is returned from there. An add instruction that works only on registers needs no translation at all. After data has been loaded from the disk, the translator is updated, so the next time that data is accessed it is already available in RAM.

(Danish Shahzad)
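The walkthrough above can be modelled as a tiny lookup table. This is only a hedged illustration, not how real MIPS hardware or page tables are implemented; the mappings 1024 to slot 2 and 300 to slot 12 are taken from the example in the text, and everything else is assumed for the sketch. On a hit the data is already in RAM; on a miss the data is "loaded from disk" and the table is updated so the next access to that address hits.

    /* translate_demo.c - a toy model of the translator described above. */
    #include <stdio.h>

    #define TABLE_SIZE  8
    #define NOT_PRESENT -1

    struct mapping { long virtual_addr; int ram_slot; };

    /* Mappings from the example: VA 1024 -> RAM slot 2, VA 300 -> RAM slot 12. */
    static struct mapping table[TABLE_SIZE] = {
        { 1024, 2 }, { 300, 12 },
        { -1, NOT_PRESENT }, { -1, NOT_PRESENT }, { -1, NOT_PRESENT },
        { -1, NOT_PRESENT }, { -1, NOT_PRESENT }, { -1, NOT_PRESENT },
    };

    static int translate(long va)
    {
        for (int i = 0; i < TABLE_SIZE; i++)
            if (table[i].virtual_addr == va)
                return table[i].ram_slot;        /* hit: data is already in RAM */

        /* Miss: pretend to load the data from disk, then record the new
         * mapping so the next access finds it in RAM. Slot 3 is arbitrary. */
        printf("VA %ld not in RAM, loading from disk...\n", va);
        for (int i = 0; i < TABLE_SIZE; i++)
            if (table[i].ram_slot == NOT_PRESENT) {
                table[i].virtual_addr = va;
                table[i].ram_slot = 3;
                return 3;
            }
        return NOT_PRESENT;
    }

    int main(void)
    {
        printf("VA 1024 -> RAM slot %d\n", translate(1024));
        printf("VA 300  -> RAM slot %d\n", translate(300));
        printf("VA 512  -> RAM slot %d\n", translate(512));   /* first access: miss */
        printf("VA 512  -> RAM slot %d\n", translate(512));   /* now a hit          */
        return 0;
    }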



























Virtual memory is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost. Virtual memory combines your PC’s RAM with temporary space on your hard disk. When RAM runs low, virtual memory moves data from RAM to a space called a paging file, freeing up RAM so your computer can complete its work.

Demand Paging

Demand paging is a type of swapping in which pages of data are not copied from disk to RAM until they are needed. In contrast, some virtual memory systems use anticipatory paging, in which the operating system attempts to anticipate which data will be needed next and copies it to RAM before it is actually required.

How Virtual Memory Works

When a computer is running, many programs are simultaneously sharing the CPU. Each running program, plus the data structures needed to manage it, is called a process.

Each process is allocated an address space. This is a set of valid addresses that can be used. This address space can be changed dynamically. For example, the program might request additional memory (from dynamic memory allocation) from the operating system.

If a process tries to access an address that is not part of its address space, an error occurs, and the operating system takes over, usually killing the process (core dumps, etc).

How does virtual memory play a role? As you run a program, it generates addresses. Addresses are generated (for RISC machines) in one of three ways:

  • A load instruction
  • A store instruction
  • Fetching an instruction

Load/store creates data addresses, while fetching an instruction creates instruction addresses. Of course, RAM doesn’t distinguish between the two kinds of addresses. It just sees it as an address.

Each address generated by a program is considered virtual. It must be translated to a real physical address. Thus, address translation is occurring all the time. As you might imagine, this must be handled in hardware, if it’s to be done efficiently.

You might think translating each address from virtual to physical is a crazy idea because of how slow it would be. However, address translation is what gives you memory protection, so it’s worth the hardware needed to support it.
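To make the protection point concrete, here is a hedged sketch (not any real hardware; the page size and table contents are assumed for illustration) of a translation step that rejects any address outside the process’s address space. A valid virtual page maps to a physical frame; anything else raises a fault, which is exactly where the operating system steps in and usually kills the process.

    /* protect_demo.c - address translation with a protection check. */
    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define NUM_PAGES 4            /* this toy process owns virtual pages 0..3 */

    static const long frame_of_page[NUM_PAGES] = { 7, 2, 9, 4 };  /* assumed mapping */

    static long translate(long virtual_addr)
    {
        long page   = virtual_addr / PAGE_SIZE;
        long offset = virtual_addr % PAGE_SIZE;

        if (virtual_addr < 0 || page >= NUM_PAGES) {
            /* Not part of the address space: the hardware raises a fault and
             * the OS takes over (usually killing the process, core dump, etc.). */
            printf("fault: address %ld is outside the address space\n", virtual_addr);
            return -1;
        }
        return frame_of_page[page] * PAGE_SIZE + offset;   /* physical address */
    }

    int main(void)
    {
        printf("VA 5000  -> PA %ld\n", translate(5000));    /* page 1: valid     */
        printf("VA 99999 -> PA %ld\n", translate(99999));   /* page 24: rejected */
        return 0;
    }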



Virtual memory

Virtual memory is a component of an operating system (OS) that allows a computer to compensate for shortages of physical memory by temporarily transferring pages of data from random access memory (RAM) to disk storage.

Eventually, the OS has to retrieve the data that was temporarily moved to disk storage – but remember, the only reason the OS moved pages of data from RAM to disk storage in the first place was that it was running out of RAM. To solve the problem, the operating system has to move different pages to the hard disk so that it has room to bring back the pages it needs right away from temporary disk storage. This process is known as paging or swapping, and the temporary storage space on the hard disk is called a pagefile or a swap file.


Swapping, which happens so quickly that the end user does not even know it is going on, is carried out by the computer’s memory management unit (MMU). The memory manager may use one of several algorithms to choose which page should be swapped out, including Least Recently Used (LRU), Least Frequently Used (LFU) or Most Recently Used (MRU).
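A minimal sketch of the Least Recently Used policy mentioned above, assuming a tiny RAM of three page frames and a made-up page reference string: each reference either hits a resident page or forces the page that has gone unused the longest to be swapped out.

    /* lru_demo.c - choosing a victim page with Least Recently Used (LRU).
     * The three frames and the reference string are illustrative assumptions. */
    #include <stdio.h>

    #define FRAMES 3

    int main(void)
    {
        int frame_page[FRAMES] = { -1, -1, -1 };  /* which page each frame holds */
        int last_used[FRAMES]  = {  0,  0,  0 };  /* time of last reference      */
        int refs[] = { 1, 2, 3, 1, 4, 2 };        /* page reference string       */
        int nrefs = sizeof refs / sizeof refs[0];

        for (int t = 0; t < nrefs; t++) {
            int page = refs[t], hit = -1;

            for (int f = 0; f < FRAMES; f++)
                if (frame_page[f] == page) hit = f;
            if (hit >= 0) {
                last_used[hit] = t;               /* hit: just refresh the time  */
                printf("ref %d: hit\n", page);
                continue;
            }

            /* Miss: use a free frame if there is one, else evict the LRU page. */
            int victim = -1;
            for (int f = 0; f < FRAMES; f++)
                if (frame_page[f] == -1) { victim = f; break; }
            if (victim == -1) {
                victim = 0;
                for (int f = 1; f < FRAMES; f++)
                    if (last_used[f] < last_used[victim]) victim = f;
                printf("ref %d: miss, swapping out page %d (least recently used)\n",
                       page, frame_page[victim]);
            } else {
                printf("ref %d: miss, loading into free frame %d\n", page, victim);
            }
            frame_page[victim] = page;
            last_used[victim] = t;
        }
        return 0;
    }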

Since your PC has a limited amount of RAM, it is possible to run out of memory when too many programs are running at one time. This is where virtual memory comes in. Virtual memory increases the memory available to your PC by expanding the “address space”, the set of places in memory where data can be stored. It does this by using hard disk space for additional memory allocation. However, since the hard drive is much slower than RAM, data stored in virtual memory must be mapped back into real memory before it can be used.


The process of mapping data back and forth between the hard drive and the RAM takes longer than accessing it directly from memory. This means that the more virtual memory is used, the more it slows your PC down. While virtual memory enables your computer to run more programs than it otherwise could, it is best to have as much physical memory as possible. This allows your PC to run most programs directly from RAM, avoiding the need to use virtual memory. Having more RAM means your computer works less, making it a faster, happier machine.



Most PCs today have something like 32 or 64 megabytes of RAM available for the CPU to use (see How RAM Works for details on RAM). Unfortunately, that amount of RAM is not enough to run all of the programs that most users expect to run at once.


For example, if you load the operating system, an email program, a Web browser and a word processor into RAM at the same time, 32 megabytes is not enough to hold it all. If there were no such thing as virtual memory, then once you filled up the available RAM your computer would have to say, “Sorry, you cannot load any more applications. Please close another application to load a new one.” With virtual memory, what the computer can do is look at RAM for areas that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application.


Because this copying happens automatically, you don’t even know it is happening, and it makes your computer feel as though it has unlimited RAM even though it has only 32 megabytes installed. Because hard disk space is so much cheaper than RAM chips, it also has a nice economic benefit.


The read/write speed of a hard drive is much slower than RAM, and hard drive technology is not designed for accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously – then the only time you “feel” the slowness of virtual memory is the slight pause when you are switching tasks. When that is the case, virtual memory is working well.


When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.

(Muhyiddeen Bello)



Mobile operating systems, also known as mobile OS, are the software that runs our smartphones and tablets and manages their resources and memory when they are being used for multitasking. The operating system is responsible for determining the functions and features available on your device, such as the thumb-wheel, keyboards, WAP, synchronization with applications, email, text messaging and more. The mobile OS also determines which third-party applications (mobile apps) can be used on your device.

Mobile operating systems combine features of a PC operating system with other features useful for mobile or handheld use. They typically include most of the following, which are considered essential for modern mobile systems: speech recognition (such as Siri), cellular data, Bluetooth, maps (such as Google Maps), Wi-Fi, touch screens and so on.

Apple chief Tim Cook described iOS 8 as a huge release comprising new features for end users as well as additional capabilities for developers. According to Tabini (2016), for users iOS 8 includes an interactive notification centre, an improved email inbox, a revamped keyboard with predictive typing, and a number of features for businesses. For developers, Apple announced a new programming language, dubbed Swift, to make it much faster to create apps for iOS.


  1. FaceTime
  2. Touch ID
  3. Multitasking
  4. iCloud etc.


iOS 8 looks and feels the same on its surface, as Apple didn’t change the way the home screen functions. However, almost every menu within has design tweaks that make your iPhone and iPad snappier to use. Double tapping the home button, for example, sends multitasking into overdrive: in addition to the usual swiping through open apps, the top of the screen now features circular profile photos of your most recent contacts (Time, 2016).


iOS updates, the latest versions of the iPhone and iPad software, are free, and they are available to download wirelessly on your iPhone, iPad or iPod touch the moment they are released. Your device even alerts you when it is time to get the latest version, which is worth noting in this review. On its face, iOS 8.0.1 integrated data from third-party fitness-focused apps into Apple’s Health app, but the rollout was far from smooth, and iOS 8.0.2 only partially fixed the problems.


Tabini, Marco. ‘iOS 7 Is Apple’s Fastest Growing Mobile Operating System’. Macworld. N.p., 2016. Web. 9 Feb. 2016.

Swider, Matt. ‘iOS 8 Review’. TechRadar. N.p., 2015. Web. 9 Feb. 2016.

http://cdn0.mos.techradar.futurecdn.net//art/mobile_phones/iPhone/iOS 8/ios-8-release-date-650-80.jpg



Magnetic tape data storage is a system for storing digital information on magnetic tape using digital recording. Modern magnetic tape is most commonly packaged in cartridges and cassettes. The device that performs writing or reading of data is a tape drive. Autoloaders and tape libraries automate cartridge handling. For example, a common cassette-based format is Linear Tape-Open, which comes in a variety of densities and is manufactured by several companies.

Sony announced in 2014 that they had developed a tape storage technology with the highest reported magnetic tape data density, 148 Gbit/in² (23 Gbit/cm²), potentially allowing tape capacity of 185 TB.

In May 2014 Fujifilm followed Sony and announced that it would develop a 154 TB tape cartridge by the end of 2015, with an areal data density of 85.9 Gbit/in² (13.3 Gbit/cm²) on linear magnetic particulate tape.


What is magnetic tape used for?

Magnetic tape is a medium for magnetic recording, made of a thin magnetizable coating on a long, narrow strip of plastic film. It was developed in Germany, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders.


What is a tape drive used for?

A tape drive is a data storage device that reads and writes data on a magnetic tape. Magnetic tape data storage is typically used for offline, archival data storage. Tape media generally has a favorable unit cost and a long archival stability.




What is a magnetic storage?

Magnetic storage (or magnetic recording) is the storage of data on a magnetised medium. Magnetic storage uses different patterns of magnetisation in a magnetisable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads.


What is a magnetic memory?

Magnetic memory storage is a system of storing information through the alignment of small grains in a magnetic material. Once the grains have been aligned by an external magnetic field, the information remains stored for long periods of time.





Holographic storage is a mass-storage technology that uses three-dimensional holographic images to enable more data to be stored in a much smaller space. The technology uses holograms, which are created when light from a single laser beam is split into two beams: the signal beam (which carries the data) and the reference beam. In holographic storage, at the point where the reference beam and the data-carrying signal beam meet, the hologram is recorded in the light-sensitive storage medium.

Holographic memory is a method that can store data at high density inside crystals or photopolymers. As current storage media such as DVDs reach the upper limit of possible data density, because of the diffraction-limited size of the writing beams, holographic storage has the potential to become the next generation of storage media. The advantage of this type of data storage is that the volume of the recording medium is used, rather than just the surface.

When you change the reference beam angle or the media position, many unique holograms can be recorded in the same volume of material. To read the stored holographic data, the reference beam is deflected off the hologram, reconstructing the stored information. This hologram is then projected onto a detector that reads an entire data page of more than one million bits at once. The technique is capable of storing huge amounts of data, up to one TB, in a cube-sized crystal; data from more than 1000 CDs can fit into a holographic memory system. Holographic storage therefore has the potential to become the next generation of storage media: conventional memories use only the surface to store data, whereas holographic data storage uses the volume.

How it works

When the blue-green argon laser is fired, a beam splitter creates two beams. One beam, called the object or signal beam, travels straight ahead, bounces off a mirror and passes through a spatial light modulator (SLM). An SLM is a liquid crystal display (LCD) that shows pages of raw binary data as clear and dark boxes. The information from the page of binary code is carried by the signal beam to the light-sensitive lithium-niobate crystal; some systems use a photopolymer in place of the crystal. A second beam, called the reference beam, exits through the side of the beam splitter and takes a separate path to the crystal. Where the two beams meet, the interference pattern that is created stores the data carried by the signal beam in a specific area of the crystal – the data is stored as a hologram.

(Muhyiddeen Bello)





Holographic Memory








High performance computers (HPC) are computers with high-level computational capacity. As of 2014, there are high performance computers that can carry out quadrillions of floating-point operations per second.

The utilization of multi-core processors coupled with centralization is an emerging development; this can be thought of as a small bunch (the multi-core processor in a tablet, smartphone, etc.) that both contributes to and depends upon the cloud.
Systems with colossal numbers of processors usually follow one of two paths: In one approach (such as in distributed computing), a large number of distinct computers (e.g., laptops) distributed across a network (such as the Internet) allocate some or all of their time to finding solutions for  a common problem; each client (individual computer) receives and completes several small tasks, submitting the results to a central server which combines the task results from all the clients into the final solution. In the other approach, large numbers of dedicated processors are placed in close proximity to each other (such as in a computer cluster); this method saves considerable time spent in moving data around and makes it possible for the processors to work collectively (rather than separately), for example in hypercube and mesh architectures.
HPCs play very important roles in the field of computational science, and are used for a wide variety of computationally demanding tasks in various fields, including weather forecasting, quantum mechanics, oil and gas exploration, climate research, and physical simulations (such as simulations of the early moments of the universe, spacecraft and airplane aerodynamics, nuclear fusion and the testing of nuclear weapons). (Riganati and Schneck, 1984)
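As a hedged miniature of the second approach (processors close together working collectively on shared data), the sketch below uses POSIX threads on a single machine rather than a real cluster: the work of summing a large array is split across several workers and the partial results are combined at the end.

    /* parallel_sum.c - splitting one computation across several workers.
     * Build with: gcc parallel_sum.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    #define N       1000000
    #define WORKERS 4

    static double data[N];

    struct task { int start, end; double partial; };

    static void *worker(void *arg)
    {
        struct task *t = arg;
        t->partial = 0.0;
        for (int i = t->start; i < t->end; i++)   /* each worker sums its own slice */
            t->partial += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++) data[i] = 1.0;    /* known answer: N */

        pthread_t thread[WORKERS];
        struct task tasks[WORKERS];
        int chunk = N / WORKERS;

        for (int w = 0; w < WORKERS; w++) {
            tasks[w].start = w * chunk;
            tasks[w].end   = (w == WORKERS - 1) ? N : (w + 1) * chunk;
            pthread_create(&thread[w], NULL, worker, &tasks[w]);
        }

        double total = 0.0;                           /* combine the partial results */
        for (int w = 0; w < WORKERS; w++) {
            pthread_join(thread[w], NULL);
            total += tasks[w].partial;
        }
        printf("sum = %.0f (expected %d)\n", total, N);
        return 0;
    }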


High performance computing


High performance computing (HPC) is the use of parallel processing to run advanced application programs efficiently, reliably and rapidly. The term applies especially to systems that function above a teraflop, or 10^12 floating-point operations per second. The term HPC is sometimes used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the currently highest operational rate for computers. Some supercomputers work at more than a petaflop, or 10^15 floating-point operations per second.

The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government organizations, especially the military, also rely on HPC for complex applications. High-performance systems often use custom-made parts in addition to so-called commodity components. As demand for processing power and speed grows, HPC will likely interest organizations of all sizes, particularly for transaction processing and data warehouses. The occasional technology enthusiast may even use an HPC system to satisfy an exceptional appetite for advanced technology.

Supercomputers are very fast and capable of calculations and simulations of astonishing complexity. From the earliest designs until today, successive machines have always been about improving on the previous machine, sometimes by orders of magnitude in speed and performance. Computer engineers have found every possible way to make computers progressively faster.
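To make the teraflop and petaflop figures above concrete, here is a rough, hedged sketch (assuming a POSIX system with clock_gettime) that times a simple loop of floating-point operations and reports the achieved FLOP/s. A single core of an ordinary desktop CPU typically lands in the gigaflop range, many orders of magnitude below a supercomputer.

    /* flops_demo.c - a rough single-core FLOP/s measurement.
     * Build with: gcc -O2 flops_demo.c */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        const long iters = 100000000L;       /* 100 million loop iterations     */
        volatile double x = 1.000000001;     /* volatile keeps the loop honest  */
        double acc = 0.0;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (long i = 0; i < iters; i++)
            acc = acc * x + 1.0;             /* 2 floating-point ops per pass   */

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double flops = (2.0 * iters) / seconds;

        printf("%.2e FLOP/s over %.2f seconds (acc = %f)\n", flops, seconds, acc);
        return 0;
    }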

Grids and Clusters

Grid computing and cluster computing are two distinct techniques for supporting HPC parallelism, which enables applications that require more than one server. Grid and cluster computing using widely available servers and workstations has been common in HPC for at least two decades, and today they carry the largest share of HPC workloads.


Whenever two or more computers are connected and used together to support a single application, or a workflow consisting of related applications, the connected system is known as a cluster. Cluster management software may be used to monitor and manage the cluster (for instance, to give shared access to the cluster to multiple users in different departments) or to manage a shared pool of software licenses across that same set of users, in compliance with software license terms.


Clusters are most commonly assembled using the same kind of computers and CPUs, for instance a rack of commodity dual- or quad-socket servers connected using high-performance network interconnects. An HPC cluster assembled this way may be used and optimized for a single persistent application, or it may be operated as a managed and scheduled resource in support of a wide range of HPC applications. A common characteristic of HPC clusters is that they benefit from locality: HPC clusters are usually built to increase the throughput and minimize the latency of data movement between compute nodes, to data storage devices, or both.
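On a real cluster the nodes usually cooperate through a message-passing library such as MPI. The sketch below is a hedged example that assumes an MPI implementation (for example Open MPI or MPICH) is installed: each rank sums its own slice of the numbers 1 to 1,000,000 and MPI_Reduce combines the partial results on rank 0 across the interconnect.

    /* cluster_sum.c - cooperative work across cluster nodes with MPI.
     * Build and run (assuming an MPI installation):
     *   mpicc cluster_sum.c -o cluster_sum && mpirun -np 4 ./cluster_sum */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many are cooperating? */

        /* Each rank sums its own slice of 1..n. */
        const long n = 1000000;
        long begin = rank * (n / size) + 1;
        long end   = (rank == size - 1) ? n : (rank + 1) * (n / size);
        double local = 0.0;
        for (long i = begin; i <= end; i++)
            local += (double)i;

        /* Partial results travel over the cluster interconnect to rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 1..%ld = %.0f\n", n, total);

        MPI_Finalize();
        return 0;
    }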


Grid computing, which is sometimes called high-throughput computing (HTC), differs from cluster computing in at least two ways: locality is not an essential requirement, and the size of the grid can grow and shrink dynamically depending on the cost and availability of resources. A grid can be assembled over a wide area, perhaps using a heterogeneous collection of server and CPU types, by “borrowing” spare processing cycles from mostly idle machines in an office environment, or over the Internet.

A very good example of grid computing is the UC Berkeley SETI@home project, which uses a large number of Internet-connected PCs in the search for extraterrestrial intelligence (SETI). SETI@home volunteers take part by running a free program that downloads and analyses radio telescope data as a background process without interrupting the normal use of the volunteer’s PC. A comparable example of web-scale grid computing is the Stanford Folding@home project, which likewise uses many thousands of volunteers’ PCs to perform molecular-level proteomics simulations useful in disease research.


Comparable grid computing techniques can be used to distribute a computer-aided design (CAD) 3D rendering job across underutilized PCs in an engineering office environment, thereby reducing or eliminating the need to purchase and deploy a dedicated CAD cluster.


Because of the distributed nature of grid computing, applications deployed this way must be designed for resilience: the unexpected loss of one or more nodes in the grid must not cause the failure of the entire computing job. Grid computing applications should also be horizontally scalable, so they can take advantage of an arbitrary number of connected computers with close to linear application speed-up.
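A hedged sketch of that work-unit model, shrunk onto one machine (the workers here run one after another for simplicity): a coordinating process hands out independent work units to child processes, and when a worker fails only that one unit is put back on the queue and retried, so the overall job never fails. The simulated failure of unit 3 is purely illustrative.

    /* work_units.c - independent work units with simple failure recovery.
     * Each unit is processed by a separate child process; a unit whose
     * worker exits abnormally is re-queued and retried. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define UNITS 6

    static int process_unit(int unit, int attempt)
    {
        /* Stand-in for real work; unit 3 "fails" on its first attempt
         * to show that only that unit is retried. */
        if (unit == 3 && attempt == 0)
            return 1;                              /* simulated lost worker */
        printf("worker %d finished unit %d\n", (int)getpid(), unit);
        return 0;
    }

    int main(void)
    {
        int done[UNITS] = { 0 }, tries[UNITS] = { 0 };
        int remaining = UNITS;

        while (remaining > 0) {
            for (int u = 0; u < UNITS; u++) {
                if (done[u]) continue;

                pid_t pid = fork();                /* launch a worker        */
                if (pid == 0)
                    _exit(process_unit(u, tries[u]));

                int status;
                waitpid(pid, &status, 0);
                tries[u]++;

                if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
                    done[u] = 1;                   /* unit completed         */
                    remaining--;
                } else {
                    printf("unit %d failed, re-queuing (attempt %d)\n", u, tries[u]);
                }
            }
        }
        printf("all %d work units complete\n", UNITS);
        return 0;
    }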




















HP Moonshot is a new kind of server for a new kind of enterprise. Before the 1990s it was very difficult to store data, a large amount of electricity was required, and systems were very complex to use. To handle 40,000 searches a second, 16 million tweets an hour and 900 million dollars of transactions every minute, volumes that grow exponentially every day, HP Moonshot is specially designed for different workloads. HP Moonshot needs 89 percent less energy, has an 80 percent smaller footprint, is 77 percent cheaper and is 97 percent less complex.

HP moonshot (Google images)
















HPE ProLiant XL730f Gen9 Server


The HPE ProLiant XL730f Gen9 Server is one of the most innovative technologies announced by HP. It is a step towards chiller-less data centres built around liquid cooling. The XL730f Gen9 places two servers on one tray, which delivers 4x the teraflops per square foot per rack. It provides 40% more flops per watt and uses 28% less energy than air-cooled systems, saving around 3,800 tons of CO2 per year. Each server tray carries one solid state drive, which is smaller in size and has higher capacity; alongside the SSD, each tray has gigabit Ethernet network card support integrated in the front and rear nodes, and FDR InfiniBand is attached to each server tray, which helps HPC users handle the fastest data traffic. As for power, the XL730f Gen9 uses a 1,200-watt power supply, and high-voltage DC distribution is more efficient than other power distribution methods. Heat pipes are fitted to the server, and the DDR4 memory (RAM) associated with each server tray is covered with heat jackets so that heat does not affect the server’s RAM. The job of the heat pipes is to carry heat away from the servers and return cooling to them.

Following are some technical features of HPE ProLiant XL730f Gen9 Server:

  • SERVER: Choice of LFF or SFF Model.
  • CPU: Up to 2 Intel Xeon E5-2600v3 Processors.
  • NETWORK: Embedded dual-port 1 GbE.
  • STORAGE: Support for SAS, SATA, and SSDs.
  • SYSTEM CONFIGS: Up to 28 LFF or 50 SFF drives.
  • POWER: Redundant Common Slot 800W or 1400W Power Supply Units.
  • OPERATING SYSTEM: Microsoft (x64), Linux (x64).