KhaasIT has years of experience helping its customers to manage their computer equipment procurement. We help take the headaches out of ordering and tracking the lifecycle of computer equipment so you can focus on your business operations.
Hardware and software procurement – KhaasIT has partnered with the major IT vendors and can bring our customers discounted pricing on most multinational brands of servers, workstations, PCs, laptops, networking products, and software.
Contact KhaasIT today for a consultation and quote!
1. What are the different kinds of computing hardware?
2. What is an operating system (OS), and what are the different kinds of operating systems?
3. What is application software, and what are its various kinds?
4. How can KhaasIT help you with network infrastructure services?
In computer networking, a server is simply a program that operates as a socket listener. The term server is also often generalized to describe a host that is deployed to execute one or more such programs.
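To make the "socket listener" idea concrete, here is a minimal sketch in Python: a program that binds to a port, listens, and answers whatever connects. This is an illustration of the concept only, not any particular server product; the echo behavior and the loopback address are chosen purely for demonstration.

```python
import socket
import threading

def run_echo_server(host="127.0.0.1", port=0):
    """A minimal socket listener: accepts one connection and echoes one message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 asks the OS for any free port
    srv.listen(1)                   # become a socket listener
    def serve():
        conn, _addr = srv.accept()  # block until a client connects
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]     # the port the OS actually chose

# A client playing its half of the client-server exchange:
port = run_echo_server()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
print(reply)  # b'echo: hello'
```

Everything else a real server adds (concurrency, protocols, security) is layered on top of this accept-and-respond loop.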
A server computer is a computer, or series of computers, that link other computers or electronic devices together. They often provide essential services across a network, either to private users inside a large organization or to public users via the internet. For example, when you enter a query in a search engine, the query is sent from your computer over the internet to the servers that store all the relevant web pages. The results are sent back by the server to your computer.
Many servers have dedicated functionality such as web servers, print servers, and database servers. Enterprise servers are servers used in a business context.
The term server is used quite broadly in information technology. Despite the many server-branded products available (such as server editions of hardware, software, and operating systems), in theory any computerized process that shares a resource with one or more client processes is a server. To illustrate this, take the common example of file sharing. While the existence of files on a machine does not classify it as a server, the mechanism by which the operating system shares those files with clients is the server.
Similarly, consider a web server application (such as the multiplatform “Apache HTTP Server”). This web server software can be run on any capable computer. For example, while a laptop or Personal Computer is not typically known as a server, they can in these situations fulfill the role of one, and hence be labeled as one. It is in this case that the machine’s purpose as a web server classifies it in general as a Server.
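The point that any capable computer can fulfill the web server role can be demonstrated with Python's standard library alone. This is a toy sketch, not a production configuration; the handler and its response text are invented for illustration.

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class HelloHandler(BaseHTTPRequestHandler):
    """Serves one fixed page, as a stand-in for real web content."""
    def do_GET(self):
        body = b"Hello from an ordinary PC acting as a web server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request console logging
        pass

# Port 0 lets the OS pick a free port, so the sketch runs on any machine.
httpd = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The same machine now acts as a client of its own web server:
url = "http://127.0.0.1:%d/" % httpd.server_port
with urllib.request.urlopen(url) as resp:
    page = resp.read()
httpd.shutdown()
print(page)
```

The moment this script runs on a laptop, the laptop is, functionally, a web server.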
In the hardware sense, the word server typically designates computer models intended for running software applications under the heavy demand of a network environment. In this client–server configuration, one or more machines (either computers or computer appliances) share information, with one acting as a host for the others.
While nearly any personal computer is capable of acting as a network server, a dedicated server will contain features making it more suitable for production environments. These features may include a faster CPU, increased high-performance RAM, and typically more than one large hard drive. More obvious distinctions include marked redundancy in power supplies, network connections, and even the servers themselves.
Between the 1990s and 2000s an increase in the use of dedicated hardware saw the advent of self-contained server appliances. One well-known product is the Google Search Appliance, a unit which combines hardware and software in an out-of-the-box packaging. Simpler examples of such appliances include switches, routers, gateways, and print servers, all of which are available in a near plug-and-play configuration.
Modern operating systems such as Microsoft Windows or Linux distributions are designed with a client–server architecture in mind. These OSs attempt to abstract the hardware, allowing a wide variety of software to work with components of the computer. In a sense, the operating system can be seen as serving hardware to the software, which in all but low-level cases must interact with it through an API.
These operating systems may be able to run programs in the background called either services or daemons. Such programs may wait in a sleep state for their necessity to become apparent, such as the aforementioned Apache HTTP Server software. Since any software which provides services can be called a server, modern personal computers can be seen as a forest of servers and clients operating in parallel.
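The wait-in-a-sleep-state pattern of a service or daemon can be sketched with a background thread and a queue: the worker blocks until a request arrives, handles it, and goes back to waiting. This is a schematic illustration of the pattern, not how any real init system or daemon framework is implemented; the request strings are invented.

```python
import queue
import threading

jobs = queue.Queue()
results = []

def service():
    """A background 'service': sleeps until a request arrives, then handles it."""
    while True:
        request = jobs.get()   # blocks (sleeps) until its necessity becomes apparent
        if request is None:    # shutdown sentinel
            break
        results.append(request.upper())
        jobs.task_done()

worker = threading.Thread(target=service, daemon=True)
worker.start()

# Clients hand requests to the waiting service:
jobs.put("index.html")
jobs.put("style.css")
jobs.join()                    # wait until the service has handled everything
jobs.put(None)                 # tell the service to shut down
worker.join()
print(results)  # ['INDEX.HTML', 'STYLE.CSS']
```

A personal computer typically runs dozens of such waiting loops at once, which is why it can be seen as a forest of servers and clients operating in parallel.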
The Internet itself is also a forest of servers and clients. Merely requesting a web page from a few kilometers away involves satisfying a stack of protocols and many examples of hardware and software servers. Among them are the routers, modems, domain name servers, and various other servers necessary to provide us with the World Wide Web.
Hardware requirements for servers vary, depending on the server application. Absolute CPU speed is not usually as critical to a server as it is to a desktop machine. A server's duty to provide service to many users over a network leads to different requirements, such as fast network connections and high I/O throughput. Since servers are usually accessed over a network, they may run in headless mode without a monitor or input device. Processes that are not needed for the server's function are not run. Many servers do not have a graphical user interface (GUI), as it is unnecessary and consumes resources that could be allocated elsewhere. Similarly, audio and USB interfaces may be omitted.
Servers often run for long periods without interruption, and availability must often be very high, making hardware reliability and durability extremely important. Although servers can be built from commodity computer parts, mission-critical servers use specialized hardware with low failure rates in order to maximize uptime. For example, servers may incorporate faster, higher-capacity hard drives; larger fans or water cooling to help remove heat; and uninterruptible power supplies that keep the servers functioning in the event of a power failure. These components offer higher performance and reliability at a correspondingly higher price. Hardware redundancy—installing more than one instance of modules such as power supplies and hard disks, arranged so that if one fails another is automatically available—is widely used. ECC memory, which detects and corrects errors, is also common; non-ECC memory is more likely to cause data corruption.
Servers are often rack-mounted and situated in server rooms for convenience and to restrict physical access for security.
Many servers take a long time for the hardware to start up and load the operating system. Servers often do extensive pre-boot memory testing and verification and startup of remote management services. The hard drive controllers then start up banks of drives sequentially, rather than all at once, so as not to overload the power supply with startup surges, and afterwards they initiate RAID system pre-checks for correct operation of redundancy. It is common for a machine to take several minutes to start up, but it may not need restarting for months or years.
Servers on the Internet
Almost the entire structure of the Internet is based upon a client–server model. High-level root name servers, DNS servers, and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world.
A server, regardless of the type, needs a robust network to give it access to the Internet and allow it to communicate with other computers. Since a server normally has exceptionally high traffic loads when compared to a normal workstation or desktop PC, it requires a network structure robust enough to prevent a bottleneck. Most server networks use gigabit Ethernet, capable of transferring data at 1,000 Mbit/s. Despite this relatively high speed, all of the parts needed to build such a network are available at most major computer and electronics stores.
Locate a gigabit networking switch (a router or hub will also work) in a central location. Remember that each computer will need a network line run to this location, so make sure that it is practically positioned. Plug an Ethernet network cable into an empty port on the back of the switch for each computer on the network, including the server. Run the cables to each system, making sure that they are routed out of the way of equipment and pedestrians. Plug the Internet modem's Ethernet cable into the switch if you want to share that as well.
Plug an Ethernet cable into the networking port located on the back of each computer, including the server. To take advantage of the gigabit speeds, each computer will need to have a gigabit-capable networking card. You can check what kind of card a Windows-based computer has by right-clicking on "My Computer," clicking "Properties," then the "Hardware" tab, followed by "Device Manager." Under "Network Adapters" the card type will be listed. If any of the computers don't have one of these cards, one can be purchased and easily installed by the average computer user.
Run the network configuration tool on each computer. On a Windows-based workstation, this is the Network Setup Wizard, located in the Control Panel. The configuration tool will automatically detect the network and what types of computers are on it. You will be asked questions, such as what name you want to give the network, what name each computer should have, and if you want to share files and printers in addition to an Internet connection. Once the wizard has completed, restart each computer, and the server network is complete.
How to Setup a Network with a Server
Hand out the roles.
Select a machine you wish to use as the server and understand which systems you will use as clients to that server. Remember that the clients ask the server for certain things and the server then responds. It is common to dedicate a machine to being a server because a machine that is currently responding to your mouse movement or painting a web page for you may struggle to serve your clients' information at the same time.
Select the physical network.
All the computers must be able to converse in some way. If all the machines have Wi-Fi, then this may be all that’s necessary. A Macintosh running OS X 10.5, for instance, can be a network server across Wi-Fi without any additional hardware. You may, however, wish to use a network router or switch or wireless access point (WAP) to provide the physical link between your clients and servers. If you’re using an Ethernet switch or a network hub, ensure your machines can physically plug into the switch or hub ports using a cable that’s no longer than 100 meters or about 300 feet.
Select your protocol.
Almost every personal computer sold since 2004 has included a networking protocol called TCP/IP. This is the linking protocol, or communication language, used for the Internet. It's usually a safe assumption that TCP/IP may be used between your clients and your server. Ideally, your network server gives out the TCP/IP addresses to the clients using the Dynamic Host Configuration Protocol (DHCP). This allows for a simplified setup on each client.
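What a DHCP server hands out can be pictured with Python's ipaddress module: the server owns a pool of addresses within one subnet and leases them out one at a time, remembering which client got which. The subnet, the reserved router address, and the MAC addresses below are hypothetical, chosen only to illustrate the bookkeeping; real DHCP also involves lease timers and a wire protocol not shown here.

```python
import ipaddress

# A hypothetical home subnet; the router/DHCP server keeps .1 for itself.
subnet = ipaddress.ip_network("192.168.1.0/24")
pool = list(subnet.hosts())[1:]   # skip 192.168.1.1 (the server's own address)

leases = {}
def lease(mac):
    """Hand the next free address to a client, remembering who got what."""
    if mac not in leases:
        leases[mac] = pool[len(leases)]
    return leases[mac]

addr = lease("aa:bb:cc:dd:ee:01")   # hypothetical client MAC address
print(addr)                         # 192.168.1.2
print(addr in subnet)               # True: every lease stays inside the subnet
```

The payoff for the client is exactly the "simplified setup" mentioned above: it asks once and receives a valid, non-conflicting address without any manual configuration.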
Consider security provisions.
It’s important to consider whether the information on the server system should be specifically protected, and if so, to what degree it should be protected. If the network will be exposed to the Internet, it’s probably a good idea to add a firewall to the network–between the Internet connection and the server. If there are certain things on the server that one client should see and another should not, then consider deploying user accounts on the server and requiring the clients to identify themselves as they connect to the server. Content on the server could be restricted by user account.
Enable server software.
Having a network server means providing some type of information to clients on the network. Though most systems sold since 2004 have such software on them, that software is typically not active by default. Both Microsoft and Apple operating systems, for instance, can serve files and printers to network clients, but neither does so unless specifically instructed to. Ensure, as you enable various server applications, that you understand the security implications for that specific software, as protecting a file from one type of network access does not automatically protect it from all types of network access.
A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.
Historically, workstations offered higher performance than personal computers, especially with respect to CPU, graphics, memory capacity, and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation (e.g. computational fluid dynamics), animation and rendering of images, and mathematical plots. Workstation consoles consist of a high-resolution display, a keyboard, and a mouse at a minimum, but may also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools.
A desktop computer is a personal computer (PC) in a form intended for regular use at a single location, as opposed to a mobile laptop or portable. Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small. Desktop computers come in a variety of types ranging from large vertical tower cases to small form factor models that can be tucked behind an LCD monitor. "Desktop" can also indicate a horizontally oriented computer case, usually intended to have the display screen placed on top to save space on the desk. Most modern desktop computers have separate screens and keyboards. Tower cases are desktop cases in the earlier sense, though not in the latter. Cases intended for home theater PC systems are usually considered to be desktop cases in both senses, regardless of orientation and placement.
A laptop is a personal computer designed for mobile use, small and light enough to sit on a person's lap while in use. A laptop integrates most of the typical components of a desktop computer, including a display, a keyboard, a pointing device (a touchpad, also known as a track pad, and/or a pointing stick), speakers, and usually a battery, into a single small and light unit. The rechargeable battery (if present) is charged from an AC adapter and typically stores enough energy to run the laptop for three to five hours in its initial state, depending on the configuration and power management of the computer.
Laptops are usually notebook-shaped, with thicknesses of 0.7–1.5 inches (18–38 mm) and dimensions ranging from 10×8 inches (27×22 cm, 13″ display) to 15×11 inches (39×28 cm, 17″ display) and up. Modern laptops weigh 3 to 12 pounds (1.4 to 5.4 kg); older laptops were usually heavier. Most laptops are designed in the flip form factor to protect the screen and the keyboard when closed. Modern tablet laptops have a complex joint between the keyboard housing and the display, permitting the display panel to swivel and then lie flat on the keyboard housing.
Laptops were originally considered to be a small niche market and were thought suitable mostly for specialized field applications such as the military, accountants and sales representatives. But today, laptops are becoming more popular for student and general uses.
Manufacturers often refer to laptops as "notebooks"; this is to avoid complaints from customers about overheated laps, and the consequent opportunistic liability suits.
A peripheral is a device attached to a host computer but not part of it, and is more or less dependent on the host. It expands the host’s capabilities, but does not form part of the core computer architecture.
Examples are computer printers, image scanners, tape drives, microphones, loudspeakers, webcams, and digital cameras.
An operating system (OS) is a set of system software programs in a computer that regulate the ways application software programs use the computer hardware and the ways that users control the computer. For hardware functions such as input/output and memory space allocation, operating system programs act as an intermediary between application programs and the computer hardware, although application programs are usually executed directly by the hardware. Operating systems are also a field of study within applied computer science.
Operating systems are found on almost any device that contains a computer with multiple programs—from cellular phones and videogame consoles to supercomputers and web servers. Operating systems are two-sided platforms, bringing consumers (the first side) and program developers (the second side) together in a single market. Some popular modern operating systems for personal computers include Microsoft Windows, Mac OS X, and Linux.
Examples of operating systems
Server operating systems
Some popular operating systems for servers — such as FreeBSD, Solaris and Linux — are derived from or are similar to UNIX. UNIX was originally a minicomputer operating system, and as servers gradually replaced traditional minicomputers, UNIX was a logical and efficient choice of operating system. Many of these derived OSs are free in both senses, and popular.
Server-oriented operating systems tend to have certain features in common that make them more suitable for the server environment: a GUI that is unavailable or optional; the ability to reconfigure and update both hardware and software to some extent without a restart; advanced backup facilities to permit regular and frequent online backups of critical data; transparent data transfer between different volumes or devices; flexible and advanced networking capabilities; automation capabilities such as daemons in UNIX and services in Windows; and tight system security, with advanced user, resource, data, and memory protection.
Server-oriented operating systems can, in many cases, interact with hardware sensors to detect conditions such as overheating, processor and disk failure, and consequently alert an operator and/or take remedial measures itself.
Because servers must supply a restricted range of services to perhaps many users while a desktop computer must carry out a wide range of functions required by its user, the requirements of an operating system for a server are different from those of a desktop machine. While it is possible for an operating system to make a machine both provide services and respond quickly to the requirements of a user, it is usual to use different operating systems on servers and desktop machines. Some operating systems are supplied in both server and desktop versions with similar user interface.
The desktop versions of the Windows and Mac OS X operating systems are deployed on a minority of servers, as are some proprietary mainframe operating systems, such as z/OS. The dominant operating systems among servers are UNIX-based or open source kernel distributions, such as Linux (the kernel).
The rise of the microprocessor-based server was facilitated by the development of Unix to run on the x86 microprocessor architecture. The Microsoft Windows family of operating systems also runs on x86 hardware, and since Windows NT has been available in versions suitable for server use.
While the role of server and desktop operating systems remains distinct, improvements in the reliability of both hardware and operating systems have blurred the distinction between the two classes. Today, many desktop and server operating systems share similar code bases, differing mostly in configuration. The shift towards web applications and middleware platforms has also lessened the demand for specialist application servers.
Microsoft Windows is a family of proprietary operating systems most commonly used on personal computers. It is the most common family of operating systems for the personal computer, with about 90% of the market share. Currently, the most widely used version of the Windows family is Windows XP, released on October 25, 2001. The newest version is Windows 7 for personal computers and Windows Server 2008 R2 for servers.
Windows originated in 1981 as an add-on to the older MS-DOS operating system for the IBM PC. First released in 1985, Windows came to dominate the business world of personal computers, and went on to set a number of industry standards and commonplace applications. Beginning with Windows XP, all modern versions are based on the Windows NT kernel. Current versions of Windows run on IA-32 and x86-64 processors, although older versions sometimes supported other architectures.
Windows is also used on servers, supporting applications such as web servers and database servers. In recent years, Microsoft has spent significant marketing and research & development money to demonstrate that Windows is capable of running any enterprise application, which has resulted in consistent price/performance records (see the TPC) and significant acceptance in the enterprise market. However, its usage in servers is not as widespread as personal computers, and here Windows actively competes against Linux and BSD for market share, while still capturing a steady majority by some accounts.
Ken Thompson wrote B, mainly based on BCPL, and used it to write UNIX, drawing on his experience in the MULTICS project. B was replaced by C, and UNIX developed into a large, complex family of interrelated operating systems which have been influential on every modern operating system. The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and GNU/Linux. The name "UNIX" is a trademark of The Open Group, which licenses it for use with any operating system that has been shown to conform to its definitions. "Unix-like" is commonly used to refer to the large set of operating systems which resemble the original UNIX.
Unix-like systems run on a wide variety of machine architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free Unix variants, such as GNU/Linux and BSD, are popular in these areas.
Some Unix variants like HP’s HP-UX and IBM’s AIX are designed to run only on that vendor’s hardware. Others, such as Solaris, can run on multiple types of hardware, including x86 servers and PCs. Apple’s Mac OS X, a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple’s earlier (non-Unix) Mac OS.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.
BSD and its descendants
The first server for the World Wide Web ran on NeXTSTEP, based on BSD.
Berkeley Software Distribution
A subgroup of the Unix family is the Berkeley Software Distribution family, which includes FreeBSD, NetBSD, and OpenBSD. These operating systems are most commonly found on web servers, although they can also function as a personal computer OS. The Internet owes much of its existence to BSD, as many of the protocols now commonly used by computers to connect, send, and receive data over a network were widely implemented and refined in BSD. The World Wide Web was also first demonstrated on computers running an OS based on BSD called NeXTSTEP.
BSD has its roots in Unix. In 1974, the University of California, Berkeley installed its first Unix system. Over time, students and staff in the computer science department there began adding new programs to make things easier, such as text editors. When Berkeley received new VAX computers in 1978 with Unix installed, the school's undergraduates modified Unix even more in order to take advantage of the computers' hardware possibilities. The Defense Advanced Research Projects Agency of the US Department of Defense took interest, and decided to fund the project. Many schools, corporations, and government organizations took notice and started to use Berkeley's version of Unix instead of the official one distributed by AT&T. Steve Jobs, upon leaving Apple Inc. in 1985, formed NeXT Inc., a company that manufactured high-end computers running on a variation of BSD called NeXTSTEP. One of these computers was used by Tim Berners-Lee as the first web server to create the World Wide Web.
Developers like Keith Bostic encouraged the project to replace any non-free code that originated with Bell Labs. Once this was done, however, AT&T sued. Eventually, after two years of legal disputes, the BSD project came out ahead and spawned a number of free derivatives, such as FreeBSD and NetBSD. However, the two year wait had set the stage for two projects that would ultimately eclipse both BSD and Unix: GNU and Linux.
Mac OS X
Mac OS X is a line of partially proprietary graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple’s primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, six more distinct "client" and "server" editions of Mac OS X have been released, the most recent being Mac OS X v10.6, which was first made available on August 28, 2009. Releases of Mac OS X are named after big cats; the current version of Mac OS X is "Snow Leopard".
The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple’s line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built-in, unlike Unix, which added these features to the design later. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.
Linux and GNU
Linux is the generic name for a UNIX-like operating system that can be used on a wide range of devices from supercomputers to wristwatches. The Linux kernel is released under an open source license, so anyone can read and modify its code. It has been modified to run on a large variety of electronics. Although estimates suggest it is used on only 0.5–2% of all personal computers, it has been widely adopted for use in servers and embedded systems (such as cell phones). Linux has superseded UNIX in most places, and is used on the 10 most powerful supercomputers in the world.
The GNU project is a mass collaboration of programmers who seek to create a completely free and open operating system that is similar to UNIX but built from completely original code. It was started in 1983 by Richard Stallman, and is responsible for many of the parts of most Linux variants. For this reason, Linux is often called GNU/Linux. Thousands of pieces of software for virtually every operating system are licensed under the GNU General Public License. Meanwhile, the Linux kernel began as a side project of Linus Torvalds, a university student from Finland. In 1991, Torvalds began work on it, and posted information about his project on a newsgroup for computer students and programmers. He received a wave of support from volunteers, who ended up creating a full-fledged kernel. Programmers from GNU took notice, and members of both projects worked to integrate the finished GNU parts with the Linux kernel in order to create a full-fledged operating system.
Google Chrome OS
Chrome OS is an operating system based on the Linux kernel and designed by Google. It targets computer users who spend most of their time on the Internet—it is technically only a web browser with no other applications, and relies on Internet applications used in the web browser to accomplish tasks such as word processing and media viewing.
Older operating systems which are still used in niche markets include OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300. Some, most notably RISC OS, MorphOS, and AmigaOS 4, continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. Yet other operating systems are used almost exclusively in academia, for operating systems education or to do research on operating system concepts. A typical example of a system that fulfills both roles is MINIX, while Singularity, for example, is used purely for research.
Application software, also known as an application, is computer software designed to help the user to perform singular or multiple related specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players.
Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light that performs a service benefiting the user.
Application software classification
There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other’s files. Business applications often come in suites, e.g. Microsoft Office, OpenOffice.org, and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of organization processes and data flow, often in a large distributed environment. (Examples include Financial, Customer Relationship Management, and Supply Chain Management). Note that Departmental Software is a sub-type of Enterprise Software with a focus on smaller organizations or groups within a large organization. (Examples include Travel Expense Management and IT Helpdesk)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include Databases, Email servers, and Network and Security Management)
Information worker software addresses the needs of individuals to create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, documentation, analytical, and collaborative tools. Word processors, spreadsheets, email and blog clients, personal information systems, and individual media editors may aid in multiple information worker tasks.
Content access software is software used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include Media Players, Web Browsers, Help browsers, and Games)
Educational software is related to content access software, but has the content and/or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software is computer software for simulation of physical or abstract systems for research, training or entertainment purposes.
Media development software addresses the needs of individuals who generate print and electronic media for others to consume, most often in a commercial or educational setting. This includes Graphic Art software, Desktop Publishing software, Multimedia Development software, HTML editors, Digital Animation editors, Digital Audio and Video composition, and many others.
Product engineering software is used in developing hardware and software products. This includes computer aided design (CAD), computer aided engineering (CAE), computer language editing and compiling tools, Integrated Development Environments, and Application Programmer Interfaces.