The Advent Of Computers Processes Computer Science

Essay added: 11-01-2017

With the advent of computers, processes began to be defined with increasing levels of automation, and early operating systems handled these activities accordingly. It became possible, for instance, to carry out job scheduling by defining the tasks the computer could automatically execute over a collection of files and data, in what came to be known as batch processing. Adding a time element to this processing critically redefined it, resulting in what became online and real-time processing, so called because of the active influence of the processor during execution. This meant that the operating system, as the package of programs used to achieve this automation, had to be redesigned to include components that take care of real-time requirements during processing. Presently most processes are real-time based, although in some setups they may still be considered batch processing depending on the frequency of output.

Throughout these processes the operating system plays a critical role in ensuring that they proceed in an almost fully automated mode. The operating system, as a piece of complex software, ensures that processing proceeds optimally; it also provides mechanisms for recovering data and files in the event that a critical system error should occur. This paper covers how the Windows operating system addresses some of the vital and inherent functions within a computer system to ensure optimized processing.

The areas covered include memory management, file management, processes and threads, input and output device management, and security and data protection, among others.

Typically an application is defined by one or more processes, where a process is simply an executing program. A process may in turn consist of one or more threads, depending on its complexity. Threads are the basic units that receive a share of processor time as determined by the operating system, so it is the onus of the Windows operating system to determine which threads execute when. Any thread can execute any portion of the process's code, including portions currently being executed by other threads. The operating system also manages thread pools, which running applications often use to have callbacks executed on their behalf. The Windows operating system synchronizes these functions so that execution is accurately passed between the various threads and the application runs properly. Invalid calls are also noted by the operating system, which can log them and force program termination if the error is fatal.

The Windows operating system uses thread pools to reduce the number of application threads, thereby providing a form of process and thread management.
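The thread-pool idea can be sketched in Python; Windows exposes its own native thread-pool API, so the `concurrent.futures` pool below is purely an illustration of the principle (a fixed set of worker threads reused across many tasks):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # Simulated unit of work; in a real application this would be a callback
    return n * n

# A small pool reuses a fixed set of threads instead of creating one per task
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the pool caps the worker count at four, eight tasks are served without ever creating eight threads, which is exactly the reduction in application threads described above.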

Since threads are the smallest processing unit schedulable by the Windows operating system, on single-processor setups multithreading is achieved through the operating system's multitasking function: time-division multiplexing, whereby Windows directs the processor to switch between the threads. Operating systems like Windows support multiprocessor threading using a process scheduler, and from within the operating system kernel programmers can manipulate threads using system calls. In preemptive multithreading, one of the ways the operating system handles threads, the operating system determines when a switch should take place. In cooperative multithreading, the other way, the thread itself has the ability to transfer control. The operating system, in this case Windows, thus plays a very crucial role in process and thread management, allocating resources such as memory, file and device handles, and sockets to optimize processing.
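The cooperative style, where each task voluntarily hands control back rather than being preempted, can be sketched with Python generators. This is a deliberate simplification of the concept, not how Windows implements it:

```python
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"       # yield = voluntarily hand control back

def run(tasks):
    """A tiny cooperative scheduler: cycles over generator-based tasks."""
    queue = deque(tasks)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t)) # run the task until it yields
            queue.append(t)       # task not finished: requeue it
        except StopIteration:
            pass                  # task finished: drop it
    return trace

trace = run([task("A", 2), task("B", 3)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Note that a task that never yields would starve every other task, which is the classic weakness of cooperative multithreading and the reason preemptive scheduling dominates today.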

One of the most important resources in any system is memory, and the efficiency of a system largely depends on how the available memory is managed. This has resulted in technologies supporting the garbage-collection principle, freeing memory to make processes more efficient. Windows carries out the memory-management task to ensure optimal processing: it uses a dynamically allocated page file to manage memory. This page file, normally allocated on disk, frees random access memory (RAM) for actively used objects during processing. To further increase efficiency, Windows allocates blocks on the disk, which can be made more efficient still through a defragmentation process. The Windows operating system can also be configured to store the page file on a different partition or disk.
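The trade-off the page file exploits, keeping hot pages in RAM and spilling cold ones to disk, is commonly approximated with least-recently-used (LRU) eviction. The sketch below simulates LRU page replacement; Windows' actual policy is a working-set mechanism, so this is a textbook illustration only:

```python
from collections import OrderedDict

def simulate_lru(frames, references):
    """Return the number of page faults for an LRU-managed RAM of `frames` slots."""
    ram = OrderedDict()   # insertion/access order tracks recency
    faults = 0
    for page in references:
        if page in ram:
            ram.move_to_end(page)         # hot page: mark as recently used
        else:
            faults += 1                   # page fault: fetch from the page file
            if len(ram) >= frames:
                ram.popitem(last=False)   # evict the least recently used page
            ram[page] = True
    return faults

print(simulate_lru(3, [1, 2, 3, 1, 4, 1, 2]))  # 5 faults
```

With only three frames for four distinct pages, the cold pages are repeatedly evicted to the simulated page file, which is exactly the RAM-freeing behaviour described above.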

The Windows operating system has continued to record high incidences of malware, unlike its open-source counterpart Linux. Malware attacks launched from networks of infected computers under the command of a malicious person remain a common occurrence and threat in Windows-based environments. The Windows operating system has no built-in component to handle such threats, so users are advised to install third-party anti-malware products to address threats targeting their systems. By comparison, Linux malware incidences are much lower; malware on Linux-based systems remains a very rare occurrence indeed. Linux has tools such as ClamAV and Panda Security's DesktopSecure, which are used to filter Windows-based malware from email as well as network traffic on Linux-based networks.

The Windows operating system initially used the file allocation table (FAT) scheme, which did not support the file permissions needed to uphold the security of data within the system. The present versions of Windows, which are NT based, instead use the newer new technology file system (NTFS). NTFS employs access control lists, enabling a system administrator to grant permissions matched against users' access tokens. This makes the NT-based Windows operating system rich in file-system permission procedures that the administrator can use to control access to and use of system resources.
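The access-control-list idea, where each object carries an ordered list of entries that grant or deny rights to principals named in the user's token, can be modelled in miniature. The structures and names below are illustrative stand-ins, not the actual NTFS data formats:

```python
# Hypothetical miniature of an NTFS-style discretionary ACL:
# each entry grants or denies a set of rights to one principal.
ACL = [
    {"principal": "alice",  "type": "deny",  "rights": {"write"}},
    {"principal": "alice",  "type": "allow", "rights": {"read", "write"}},
    {"principal": "admins", "type": "allow", "rights": {"read", "write", "delete"}},
]

def check_access(token, right, acl):
    """First matching entry wins; deny entries are ordered before allows,
    mirroring how an NTFS DACL is conventionally evaluated."""
    for entry in acl:
        if entry["principal"] in token and right in entry["rights"]:
            return entry["type"] == "allow"
    return False  # no matching entry: access is implicitly denied

alice_token = {"alice", "users"}          # groups/identities held by the user
print(check_access(alice_token, "read", ACL))   # True  (allow entry matches)
print(check_access(alice_token, "write", ACL))  # False (deny entry matches first)
```

Placing deny entries before allow entries is the design choice that lets an administrator carve exceptions out of a broad grant, which is the flexibility the paragraph above attributes to NTFS permissions.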

By using tokens, the system can categorize users based on the roles or tasks they are expected to perform. Their credentials are held in encrypted form and referenced whenever they log on to the system or launch an application. The Windows operating system runs each logged-in session with standard user permissions, reducing the chance that malicious programs run with elevated privileges. The User Account Control framework remains instrumental in achieving this security.

Overall throughput within a system is also determined by how efficiently input and output devices are managed. Until recently, the basic input/output system (BIOS) was the standard firmware interface. Boot firmware is the first code executed when the computer is powered on: the BIOS identifies system devices such as the hard disk, keyboard, mouse, DVD drive, and video display card, then locates the operating system on a boot device such as the hard disk, eventually loading it and handing over control. The BIOS also provides a library of basic input and output routines that can be used to operate and control peripherals. Though the BIOS remains in widespread use, it is being replaced by the Extensible Firmware Interface (EFI), with input and output devices interfacing with the operating system through device drivers.

It is expedient that the operating system remain in control, especially during multi-user processing, to ensure that concurrent reads and writes to data within a database remain consistent. The Windows operating system supports database consistency through read and write locks on pieces of data in a multi-user, multiprocessing environment. Lock management must in turn be careful to avoid deadlock, which can affect overall system efficiency.
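The read/write-lock discipline can be sketched as follows. This is a generic readers-writer lock in Python, not the Windows slim reader/writer lock API itself: many readers may hold the lock concurrently, but a writer gets exclusive access:

```python
import threading

class ReadWriteLock:
    """Many concurrent readers, or one exclusive writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1            # readers never block each other

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # wake a waiting writer

    def acquire_write(self):
        self._cond.acquire()              # writers serialize on the condition lock
        while self._readers > 0:
            self._cond.wait()             # wait for active readers to drain

    def release_write(self):
        self._cond.release()

lock = ReadWriteLock()
shared = []

def writer():
    lock.acquire_write()
    shared.append("row")                  # exclusive access while writing
    lock.release_write()

threads = [threading.Thread(target=writer) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared))  # 4: every write was applied, none lost to a race
```

A consistent lock-acquisition order (here, all writers funnel through one condition variable) is also the standard defence against the deadlock mentioned above.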

The Windows operating system also runs a scheduler that assigns processes to available processors, with the goal of optimizing processor time. Operating system schedulers may be long-term, mid-term, or short-term. These schedulers use scheduling algorithms such as first in, first out (FIFO), shortest job first, priority-based, round robin, and multilevel queue. Windows NT-based operating systems use a multilevel feedback queue, where the priority of each task is adjusted based on its input and output requirements as well as its processor usage.
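Round robin, one of the algorithms listed above, is simple enough to simulate directly. The sketch below is the textbook algorithm (each process runs for at most one time quantum, then goes to the back of the queue), not the Windows dispatcher:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; return each process's completion time."""
    queue = deque(bursts.items())          # (name, remaining burst time)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)   # run for at most one time quantum
        clock += slice_
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # preempted: back of queue
        else:
            finish[name] = clock           # process completed at this time
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Notice that the short job P3 finishes early even though it arrived last in the queue, which is the responsiveness round robin buys at the cost of extra context switches.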

Most operating systems, Windows included, provide ways of setting up a redundant array of inexpensive disks (RAID). This is a data storage scheme that divides and replicates data across a number of disks, aiming to increase data reliability as well as input and output performance. The Windows operating system also ensures data protection through the Data Protection Application Programming Interface (DPAPI), an encryption facility that enables Windows to generate cryptographic keys from keys and passwords.
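The redundancy idea behind RAID can be shown with the XOR parity used by striped-with-parity levels such as RAID 5: the parity block is the XOR of the data blocks, so any single lost block can be reconstructed from the survivors. A byte-level sketch:

```python
def parity(blocks):
    """XOR equal-length blocks together to form a parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks striped across three disks, parity stored on a fourth
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d1, d2, d3])

# Disk 2 fails: its contents are recoverable from the survivors plus parity,
# because d1 ^ d3 ^ (d1 ^ d2 ^ d3) == d2
recovered = parity([d1, d3, p])
print(recovered == d2)  # True
```

This is why a RAID 5 array of N disks survives any single-disk failure while spending only one disk's worth of capacity on redundancy, rather than the full mirror copy RAID 1 requires.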
