22 September 2022, Thursday

 

What is Inter Process Communication?

In general, inter-process communication (IPC) is a mechanism provided by the operating system (OS). Its main goal is to enable communication between several processes. In short, IPC allows one process to let another process know that some event has occurred.

Let us now look at the general definition of inter-process communication, which will explain the same thing that we have discussed above.

Definition

"Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."

To understand inter-process communication, consider the following diagram, which illustrates its importance:

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter-process communication. Typically, it is provided by the inter-process communication control mechanisms, but it can sometimes also be handled by the communicating processes themselves.

The following methods are used to provide synchronization:

  1. Mutual Exclusion
  2. Semaphore
  3. Barrier
  4. Spinlock

Mutual Exclusion:-

It is generally required that only one process or thread can enter the critical section at a time. This helps in synchronization and creates a stable state by avoiding race conditions.
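To make this concrete, here is a minimal sketch using POSIX threads; the function and variable names are made up for illustration, and error checking is omitted:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;            /* shared data protected by the mutex */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);         /* enter the critical section */
        shared_counter++;                  /* only one thread runs this at a time */
        pthread_mutex_unlock(&lock);       /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* 200000; no race condition */
    return 0;
}

Without the mutex, both threads would update shared_counter at the same time and the final value would be unpredictable.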

Semaphore:-

A semaphore is a variable that controls access to shared resources by several processes. Semaphores are further divided into two types, as follows:

  1. Binary Semaphore
  2. Counting Semaphore

Barrier:-

A barrier does not allow an individual process to proceed until all participating processes have reached it. Many parallel languages use barriers, and collective routines impose them.
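For instance, here is a minimal sketch assuming a POSIX system that provides pthread_barrier_t (names are illustrative; error checking omitted):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
static pthread_barrier_t barrier;

static void *phase_worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld: phase 1 done\n", id);
    pthread_barrier_wait(&barrier);        /* nobody proceeds until all 4 arrive */
    printf("thread %ld: phase 2 starts\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, phase_worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}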

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits in a loop, repeatedly checking whether the lock is available. This is known as busy waiting because, even though the process is active, it does not perform any useful work while it spins.
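A minimal sketch of this busy waiting, using a C11 atomic_flag as the lock word (names are illustrative only):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static int shared_data;

static void spin_lock(void)
{
    /* busy wait: keep testing until the flag was previously clear */
    while (atomic_flag_test_and_set(&lock_flag))
        ;                                  /* the CPU is busy but does no useful work */
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);
}

void update_shared(int value)
{
    spin_lock();
    shared_data = value;                   /* critical section, kept very short */
    spin_unlock();
}

Spinlocks only pay off when the critical section is very short; otherwise the spinning wastes CPU time that a blocking lock would not.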

Approaches to Interprocess Communication

We will now discuss some different approaches to inter-process communication which are as follows:

[Diagram: approaches to inter-process communication]

These are a few different approaches for Inter- Process Communication:

  1. Pipes
  2. Shared Memory
  3. Message Queue
  4. Direct Communication
  5. Indirect communication
  6. Message Passing
  7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a data channel that is unidirectional in nature, meaning data can move through it in only one direction at a time. Still, two such channels can be used so that two processes can both send and receive data. A pipe typically uses the standard input and output methods. Pipes are used in all POSIX systems and in different versions of the Windows operating system as well.
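A minimal sketch of a unidirectional pipe between a parent process and its child on a POSIX system (error checking omitted; the message text is arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                             /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    pipe(fd);
    if (fork() == 0) {                     /* child: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                          /* parent: reads from the pipe */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}

To get two-way communication, two such pipes would be created, one for each direction.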

Shared Memory:-

Shared memory is memory that can be accessed by multiple processes simultaneously. It is used primarily so that the processes can communicate with each other. Shared memory is supported by almost all POSIX and Windows operating systems.
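A minimal sketch using POSIX shared memory, in which a parent and its child map the same memory object; the object name "/demo_shm" is arbitrary and error checking is omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    const char *name = "/demo_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                             /* size of the shared region */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    if (fork() == 0) {                               /* child writes into the region */
        strcpy(region, "written via shared memory");
        _exit(0);
    }
    wait(NULL);                                      /* parent reads after the child exits */
    printf("parent read: %s\n", region);
    munmap(region, 4096);
    shm_unlink(name);                                /* remove the shared object */
    return 0;
}

On some systems the program must be linked with -lrt.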

Message Queue:-

In general, several different processes are allowed to read and write data to the message queue. Messages are stored in the queue until their recipients retrieve them. In short, the message queue is very helpful in inter-process communication and is used by all operating systems.
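A minimal sketch using a POSIX message queue; the queue name "/demo_mq" and the message are arbitrary, and error checking is omitted (link with -lrt on some systems):

#include <stdio.h>
#include <fcntl.h>
#include <mqueue.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(mq, "job #1", 7, 0);                 /* message stays in the queue...     */
    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);      /* ...until a recipient retrieves it */
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}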

To understand the concept of Message queue and Shared memory in more detail, let's take a look at its diagram given below:

[Diagram: message queue and shared memory between two processes]

Message Passing:-

It is a mechanism that allows processes to synchronize and communicate with each other. By using message passing, processes can communicate with each other without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations that are as follows:

  • send (message)
  • receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication process, usually, a link is created or established between two communicating processes. However, in every pair of communicating processes, only one link can exist.

Indirect Communication

Indirect communication is established when processes share a common mailbox, and each pair of processes may share several communication links. These links can be unidirectional or bidirectional.

FIFO:-

It is used for general communication between two unrelated processes. It can also be considered full-duplex, meaning that one process can communicate with the other and vice versa.
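A minimal sketch of the writer side of a FIFO (named pipe) on a POSIX system; a separate, unrelated process can open the same path for reading (the path is arbitrary; error checking omitted):

#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";
    mkfifo(path, 0600);                    /* create the FIFO special file */

    int fd = open(path, O_WRONLY);         /* blocks until some reader opens the FIFO */
    const char *msg = "hello over the FIFO";
    write(fd, msg, strlen(msg) + 1);
    close(fd);
    return 0;
}

Any unrelated process can act as the reader, for example by running cat /tmp/demo_fifo in another terminal.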

Some other different approaches

  • Socket:-

It acts as an endpoint for sending or receiving data in a network. It works both for data sent between processes on the same computer and for data sent between different computers on the same network. Hence, it is used by several types of operating systems.
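As a minimal sketch of socket-based communication between processes on the same machine, a connected pair of UNIX-domain sockets can be used (illustrative only; error checking omitted):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);    /* two connected endpoints */

    if (fork() == 0) {                          /* child uses one endpoint */
        close(sv[0]);
        write(sv[1], "ping", 5);
        close(sv[1]);
        _exit(0);
    }
    close(sv[1]);                               /* parent uses the other endpoint */
    char buf[16];
    read(sv[0], buf, sizeof(buf));
    printf("parent got: %s\n", buf);
    close(sv[0]);
    wait(NULL);
    return 0;
}

For communication between different computers, the same socket API is used with AF_INET addresses instead.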

  • File:-

A file is a data record or document stored on disk that can be acquired on demand from the file server. Another important point is that several processes can access the file as required.

  • Signal:-

As the name implies, signals are used in inter-process communication in a minimal way. Typically, they are system messages sent from one process to another. Therefore, they are not used for transferring data but for sending remote commands between processes.

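A minimal sketch: a process installs a handler and a signal is then delivered to it with kill(); here the process signals itself just to keep the example self-contained (illustrative only):

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo)
{
    (void)signo;
    got_signal = 1;                 /* only set a flag; handlers should stay tiny */
}

int main(void)
{
    signal(SIGUSR1, handler);       /* register the handler for SIGUSR1 */
    kill(getpid(), SIGUSR1);        /* another process would pass the target's PID */
    if (got_signal)
        printf("SIGUSR1 was delivered\n");
    return 0;
}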

Why do we need inter-process communication?

There are numerous reasons to use inter-process communication for sharing data. Some of the most important are given below:

  • It helps to speedup modularity
  • Computational
  • Privilege separation
  • Convenience
  • Helps operating system to communicate with each other and synchronize their actions as well.

Note: IPC cannot be considered a solution to all problems but what is important is that it does its job very well.

 

18 September 2022, Sunday

 

Examples of Real-time Operating Systems

Author: ramu@madras; some of the concepts are from the class manuscripts.

Real-time operating systems (RTOS) are used to handle time-critical, real-life scenarios. A few examples of real-time operating systems include:
1. VxWorks: This OS is part of the Mars 2020 rover (2020 launch). In the past, it was also used in the Phoenix Mars lander, the Boeing 787, Honda's ASIMO robot, etc.
2. QNX: The QNX Neutrino RTOS finds widespread use in embedded systems and is compatible with platforms like ARM and x86. Industries using QNX include automotive, railway transportation and healthcare.
3. eCos: eCos is an open-source real-time operating system. An example of eCos use is the Chibis-M microsatellite's attitude and stabilization control system.
4. RTLinux: RTLinux is a hard real-time operating system. It runs the Linux operating system as a fully preemptible process. As a result, it is useful in controlling robots, data acquisition systems and manufacturing plants.

What is a Real-Time Operating System(RTOS)?

A real-time operating system is a time-critical operating system: the response to any event must come within a specified time interval. A delay in response can result in disastrous effects.

For instance, an operating system like Windows 10, which comes as the default OS on most laptops and PCs, is not a real-time operating system. The reason is that even if an application, say the VLC player, starts with a delay, there is no harmful impact. On the contrary, the OS used in an aircraft is a real-time operating system, because if the landing gear does not come out within a specified interval after the command is issued, the aircraft may crash.

So, situations like red-light crossings, autonomous cars, etc., all call for real-time operating systems. An RTOS can be event-driven or time-sharing. In the event-driven strategy, the OS switches tasks only if a higher-priority task requires servicing. In time-sharing, tasks are switched in a round-robin fashion based on a time quantum.

Types of Real-time operating systems

1. Soft Real Time OS

A Soft RTOS is a system in which the deadline for certain tasks can be delayed to some extent. For example, if the task deadline is 1:20:30 PM, then the task may occasionally complete at, say, 1:20:35 PM. However, it cannot be delayed for too long, say until 1:30 PM.

2. Hard Real Time OS

A Hard RTOS is a system which meets the deadline for every process at all times. For example, if the task deadline is 1:20:30PM, then the task has to complete before 1:20:30PM every time.

Differences between Soft RTOS and Hard RTOS

Characteristic   | Hard RTOS       | Soft RTOS
Response time    | Strict deadline | Soft deadline
Safety           | Critical        | Not critical
Data integrity   | Short term      | Long term
Error detection  | Automatic       | User assisted

Scheduling Algorithms for RTOS

CPU scheduling algorithms used for RTOS are also different from normal CPU scheduling algorithms like FCFS, SJF etc. For example, a couple of popular algorithms are:
1. Rate Monotonic scheduling
2. Earliest Deadline First scheduling
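As a rough illustration of how rate-monotonic scheduling is analysed, here is a sketch of the classical Liu and Layland utilisation test; the task parameters are made up for the example:

#include <stdio.h>
#include <math.h>

/* Rate-monotonic test: n periodic tasks are schedulable if
   U = sum(Ci/Ti) <= n * (2^(1/n) - 1). The test is sufficient, not necessary. */
int main(void)
{
    double C[] = { 1.0, 2.0, 3.0 };      /* worst-case execution times (made up) */
    double T[] = { 8.0, 10.0, 20.0 };    /* periods (made up) */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable under RM" : "test inconclusive");
    return 0;
}

Earliest Deadline First, in contrast, can schedule any such task set whose total utilisation does not exceed 1.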

13 September 2022, Tuesday

 Text Books:

1) Cryptography and Network Security, Behrouz A. Forouzan, Debdeep Mukhopadhyay, (3e) McGraw Hill.

2) Cryptography and Network Security, William Stallings, (6e) Pearson.

3) Everyday Cryptography, Keith M. Martin, Oxford.

Reference Books:

1) Network Security and Cryptography, Bernard Menezes, Cengage Learning

Course Outcomes:

  • To be familiar with information security awareness and have a clear understanding of its importance
  • To master the fundamentals of secret-key and public-key cryptography
  • To master protocols for security services
  • To be familiar with network security threats and countermeasures
  • To be familiar with network security designs using available secure solutions (such as PGP, SSL, IPSec, etc.)



9 September 2022, Friday

GOOGLE MEET. A PPT presentation problem arose, but with some effort it was solved; delayed but not stopped.

 gmail(100), gsuit(250).

faced problem at the time of interviewing

Video conference: set date, time, description, save, send invitation; or copy the link and paste it in Gmail or WhatsApp.

Start the meeting from the link: join with Meet, join new (start a new meeting, start an instant meeting).

PPT presentation [settings: audio, video, general resolution], present now, entire screen; you can also chat.

Never share the entire screen or the browser window, as it causes mirroring; share only a window or a tab.

audio, video,cc , arrow(PPT),three dots

At the end, stop the presentation and leave the call. For presentation slides, Envato Elements (free selection) and GraphicRiver (paid) offer premium templates.



MIOT INTL HOSP TO MARINA MALL, NAVALUR. GOING AND COMING BACK LANDMARKS

              Coming back

Holiday Inn STRAIGHT - National Fashion Technology - LEFT TURN (curved path while coming back) - Ramco Systems - IIT M main gate - Raj Bhavan junction - GUINDY NATIONAL PARK - ITC GRAND CHOLA - IDBI Bank - Tamil Nadu Newsprint - BSNL exchange - LEFT

GOING: Le Royal Meridien - SPIC - Little Mount metro - Vasanth & Co, Patel statue - Highways Research Institute - Arumuga Valli Ammal temple

                Hap daily- iit flyover- end take RIGHT TURN (Madhya kailash signal) temple also. RAJIV IT corridor- STRAIGHT

 

 


 

8 September 2022, Thursday

from SRM to vels

 From ramapuram Miot intl hospital backside

Ramapuram signal - MIOT hospital - Butt Road - Guindy flyovers

Go straight on this road and you will see the following landmarks:

Saravana Bhavan

Radisson Blu hotel

Trident hotel

Shanthi service station

Meenambakkam metro

SBI

PVR Grand Galada Chennai

DRDO transit facility

CSI church

Hotel NK Grand Park

Pallavaram bus stand (on the left, in the middle)

Bismillah king beef hotel

Flyover, Surabhi wedding hall

Left side: Vels University road

 

 

While coming back 

From Vels University, take a right at Surabhi function hall and move towards the airport metro - Nanganallur metro - St. Thomas Mount Road.

Take a right turn and you will reach the Nandambakkam trade centre side; no need to go via the Guindy flyover, this way is easier.





4 September 2022, Sunday

TOMORROW CLASS IS ABOUT THE MULTITHREADING IN JAVA WAITING FOR THE EXPLANATION

 an example getCurrentThreadName(), main[thread groupname, priority, childthreadname]

ThreadGroup in Java

Java provides a convenient way to group multiple threads in a single object. In such a way, we can suspend, resume or interrupt a group of threads by a single method call.

Note: Now suspend(), resume() and stop() methods are deprecated.

Java thread group is implemented by java.lang.ThreadGroup class.

A ThreadGroup represents a set of threads. A thread group can also include other thread groups. The thread groups form a tree in which every thread group except the initial thread group has a parent.

1) ThreadGroup(String name) creates a thread group with the given name.
2) ThreadGroup(ThreadGroup parent, String name) creates a thread group with the given parent group and name.

A thread is allowed to access information about its own thread group, but it cannot access the information about its thread group's parent thread group or any other thread groups.

ThreadGroup tg1 = new ThreadGroup("Group A");
Thread t1 = new Thread(tg1, new MyRunnable(), "one");
Thread t2 = new Thread(tg1, new MyRunnable(), "two");
Thread t3 = new Thread(tg1, new MyRunnable(), "three");

Now all 3 threads belong to one group. Here, tg1 is the thread group (named "Group A"), MyRunnable is the class that implements the Runnable interface, and "one", "two" and "three" are the thread names.

Now we can interrupt all threads by a single line of code only.

Thread.currentThread().getThreadGroup().interrupt();













The reality of multi-core hardware has made concurrent programs pervasive. Unfortunately, writing correct concurrent programs is difficult. An atomicity violation, which is caused by concurrent executions unexpectedly violating the atomicity of a certain code region, is one of the most common concurrency errors (atomicity violation bugs).



HTM/STM.

Transactional memory, which originated in database theory, provides an alternative strategy for process synchronization.

A memory transaction is an atomic sequence of memory read-write operations. A memory transaction is committed if all the operations in it complete; otherwise, the operations must be aborted and rolled back. The benefits of transactional memory can be obtained through features added to a programming language. Consider an example. Suppose we have a function update() that modifies shared data. Traditionally, this function would be written using mutex locks (or semaphores), such as the following:

void update()
{
   acquire();
   /* modify shared data */
   release();
}

However, using synchronization mechanisms such as mutex locks and semaphores involves many potential problems, including deadlock. Additionally, as the number of threads increases, traditional locking scales less well, because the level of contention among threads for lock ownership becomes very high. As an alternative to traditional locking methods, new features that take advantage of transactional memory can be added to a programming language. In our example, suppose we add the construct atomic{S}, which ensures that the operations in S execute as a transaction. This allows us to rewrite the update() function as follows −

void update (){
   atomic {
      /* modify shared data */
   }
}

The advantage of using such a mechanism rather than locks is that the transactional memory system, not the developer, is responsible for guaranteeing atomicity. Additionally, because no locks are involved, deadlock is not possible. Furthermore, a transactional memory system can identify which statements in atomic blocks can be executed concurrently, such as concurrent read access to a shared variable. It is, of course, possible for a programmer to identify these situations and use reader-writer locks, but the task becomes increasingly difficult as the number of threads within an application grows.

Transactional memory can be implemented in either software or hardware. Software transactional memory (STM) implements transactional memory exclusively in software; no special hardware is needed. It works by inserting instrumentation code inside transaction blocks. The code is inserted by a compiler and manages each transaction by examining where statements may run concurrently and where specific low-level locking is required. Hardware transactional memory (HTM) uses hardware cache hierarchies and cache coherency protocols to manage and resolve conflicts involving shared data residing in separate processors' caches. It requires no special code instrumentation and thus has less overhead than STM. However, HTM does require that existing cache hierarchies and cache coherency protocols be modified to support transactional memory.

Transactional memory has existed for several years without widespread implementation. However, the growth of multicore systems and the associated emphasis on concurrent and parallel programming have prompted a significant amount of research in this area by both academics and commercial software and hardware vendors.
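As one concrete, hedged example of the software route: GCC ships an experimental transactional memory extension enabled with -fgnu-tm, in which an atomic{S}-style block is written as __transaction_atomic. The sketch below is illustrative only and uses made-up names:

/* build with: gcc -fgnu-tm -c tm_update.c */
static int shared_balance = 0;

void update(int amount)
{
    __transaction_atomic {             /* runs as a transaction, no explicit lock */
        shared_balance += amount;      /* modify shared data */
    }
}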

Semaphores (binary, counting): short notes

 

Dijkstra proposed a significant technique for managing concurrent processes for complex mutual exclusion problems. He introduced a new synchronization tool called the semaphore.

Semaphores are of two types −

  1. Binary semaphore

  2. Counting semaphore

A binary semaphore can take only the values 0 and 1. A counting semaphore can take nonnegative integer values.

Two standard operations, wait and signal, are defined on the semaphore. Entry to the critical section is controlled by the wait operation, and exit from the critical section is taken care of by the signal operation. The wait and signal operations are also called P and V operations. The manipulation of a semaphore S takes place as follows:

  1. The wait operation P(S) decrements the semaphore value by 1. If the resulting value becomes negative, then the P operation is delayed until the condition is satisfied.

  2. The V(S) i.e. signals operation increments the semaphore value by 1.

Mutual exclusion on the semaphore is enforced within P(S) and V(S). If a number of processes attempt P(S) simultaneously, only one process will be allowed to proceed and the other processes will wait. These operations are defined as follows −

P(S) or wait(S): 
If S > 0 then
   Set S to S-1
Else
   Block the calling process (i.e. Wait on S)

V(S) or signal(S): 
If any processes are waiting on S
   Start one of these processes
Else
   Set S to S+1

The semaphore operations are implemented as operating system services, so wait and signal are atomic in nature, i.e., once started, their execution cannot be interrupted.

Thus the semaphore is a simple yet powerful mechanism to ensure mutual exclusion among concurrent processes.
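A minimal sketch of these wait/signal (P/V) operations using POSIX semaphores, where a counting semaphore initialised to 2 lets at most two threads into the region at once (names are illustrative; error checking omitted):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;                        /* counting semaphore */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&sem);                      /* P(S): decrement, block if the value is 0 */
    printf("thread %ld inside the region\n", id);
    sem_post(&sem);                      /* V(S): increment, wake a waiter if any */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&sem, 0, 2);                /* initial value 2: at most two threads at once */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&sem);
    return 0;
}

Initialising the semaphore to 1 instead would make it behave as a binary semaphore, i.e., a mutual exclusion lock.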

OPERATING SYSTEMS MY CLASS NOTES, MAJORLY FROM GALVIN

 

 

Operating System Tutorial

An Operating System (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs. When you start using a computer system, it is the Operating System (OS) that acts as an interface between you and the computer hardware. The operating system is low-level software, categorised as system software, which supports a computer's basic functions, such as memory management, task scheduling and controlling peripherals.

This simple and easy tutorial will take you through step by step approach while learning Operating System concepts in detail.

What is Operating System?

An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Generally, a Computer System consists of the following components:

  • Computer Users are the users who use the overall computer system.
  • Application Softwares are the softwares which users use directly to perform different activities. These softwares are simple and easy to use like Browsers, Word, Excel, different Editors, Games etc. These are usually written in high-level languages, such as Python, Java and C++.
  • System Softwares are the softwares which are more complex in nature and they are more near to computer hardware. These software are usually written in low-level languages like assembly language and includes Operating Systems (Microsoft Windows, macOS, and Linux), Compiler, and Assembler etc.
  • Computer Hardware includes Monitor, Keyboard, CPU, Disks, Memory, etc.

So now let's put it in simple words:

If we consider the computer hardware to be the body of the computer system, then we can say the Operating System is its soul, which brings it alive, i.e., makes it operational. We can never use a computer system if it does not have an Operating System installed on it.

Operating System - Examples

There are plenty of Operating Systems available in the market which include paid and unpaid (Open Source). Following are the examples of the few most popular Operating Systems:

  • Windows: This is one of the most popular and commercial operating systems developed and marketed by Microsoft. It has different versions in the market like Windows 8, Windows 10 etc and most of them are paid.
  • Linux This is a Unix based and the most loved operating system first released on September 17, 1991 by Linus Torvalds. Today, it has 30+ variants available like Fedora, OpenSUSE, CentOS, UBuntu etc. Most of them are available free of charges though you can have their enterprise versions by paying a nominal license fee.
  • MacOS This is again a kind of Unix operating system developed and marketed by Apple Inc. since 2001.
  • iOS This is a mobile operating system created and developed by Apple Inc. exclusively for its mobile devices like iPhone and iPad etc.
  • Android This is a mobile Operating System based on a modified version of the Linux kernel and other open source software, designed primarily for touchscreen mobile devices such as smartphones and tablets.

Some other old but popular Operating Systems include Solaris, VMS, OS/400, AIX, z/OS, etc.

Operating System - Functions

In brief, the following are some of the important functions of an operating system, which we will look at in more detail in upcoming chapters:

  • Process Management
  • I/O Device Management
  • File Management
  • Network Management
  • Main Memory Management
  • Secondary Storage Management
  • Security Management
  • Command Interpreter System
  • Control over system performance
  • Job Accounting
  • Error Detection and Correction
  • Coordination between other software and users
  • Many more other important tasks

Operating Systems - History

Operating systems have been evolving through the years. In the 1950s, computers were limited to running one program at a time like a calculator, but later in the following decades, computers began to include more and more software programs, sometimes called libraries, that formed the basis for today’s operating systems.

The first Operating System was created by General Motors in 1956 to run a single IBM mainframe computer, the IBM 704. IBM was the first computer manufacturer to develop operating systems and distribute them with its computers, in the 1960s.

Here are a few facts about Operating System evolution:

  • Stanford Research Institute developed the oN-Line System (NLS) in the late 1960s, which was the first operating system that resembled the desktop operating system we use today.
  • Microsoft bought QDOS (Quick and Dirty Operating System) in 1981 and branded it as Microsoft Operating System (MS-DOS). As of 1994, Microsoft had stopped supporting MS-DOS.
  • Unix has its roots in the MULTICS project, developed in the mid-1960s by the Massachusetts Institute of Technology, AT&T Bell Labs, and General Electric as a joint effort. MULTICS stands for Multiplexed Information and Computing Service; Unix itself was later developed at AT&T Bell Labs.
  • FreeBSD is also a popular UNIX derivative, originating from the BSD project at Berkeley. All modern Macintosh computers run OS X (macOS), which incorporates code derived from FreeBSD.
  • Windows 95 is a consumer-oriented graphical user interface-based operating system built on top of MS-DOS. It was released on August 24, 1995 by Microsoft as part of its Windows 9x family of operating systems.
  • Solaris is a proprietary Unix operating system originally developed by Sun Microsystems in 1991. After the Sun acquisition by Oracle in 2010 it was renamed Oracle Solaris.

Why to Learn Operating System

If you aspire to become a great computer programmer, then it is highly recommended that you understand how exactly an Operating System works, inside out. This gives you the opportunity to understand how exactly data is saved on disk, how different processes are created and scheduled to run by the CPU, and how to interact with different I/O devices and ports.

There are various low-level concepts which help a programmer to design and develop scalable software. The bottom line is that no one can be assumed to be a good application software developer without a good understanding of Operating System concepts, and it is unimaginable for someone to become a system software developer without knowing Operating Systems in depth.

If you are a fresher and applying for a job in any standard company like Google, Microsoft, Amazon, IBM etc then it is very much possible that you will be asked questions related to Operating System concepts.

Target Audience

This tutorial has been prepared for Computer Science professionals and students, especially BCA, MCA, B.Tech and M.Tech engineering students, to help them understand basic to advanced concepts related to an Operating System. Operating Systems is one of the core subjects in every university teaching Computer Science, and it carries a lot of weight from an exam point of view.

Prerequisites

Before you start learning about Operating Systems using this tutorial, we assume that you are already aware of computer fundamentals such as what computer hardware is, the CPU, primary memory, secondary memory, devices, files etc. If you are not already aware of these concepts, it will be difficult to understand various concepts related to Operating Systems, so it is highly recommended to go through our Computer Fundamentals tutorial before attempting to learn about Operating Systems.

Operating System - Overview

 

An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

An operating system is software that enables applications to interact with a computer's hardware. The software that contains the core components of the operating system is called the kernel.

The primary purposes of an Operating System are to enable applications (software) to interact with a computer's hardware and to manage a system's hardware and software resources.

Some popular Operating Systems include the Linux Operating System, the Windows Operating System, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys etc.

Definitions

We can have a number of definitions of an Operating System. Let's go through few of them:

An Operating System is the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals.

We can refine this definition as follows:

An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.

Following is another definition taken from Wikipedia:

An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs.

Architecture

We can draw a generic architecture diagram of an Operating System which is as follows:

Conceptual view of an Operating System

Operating System Generations

Operating systems have been evolving over the years. We can categorise this evolution into different generations, briefly described below:

0th Generation

The term 0th generation is used to refer to the period of development of computing when Charles Babbage invented the Analytical Engine and, later, John Atanasoff created a computer in 1940. The hardware component technology of this period was the electronic vacuum tube. There was no Operating System available for this generation of computers, and computer programs were written in machine language. The computers of this generation were inefficient and dependent on the varying competencies of individual programmers acting as operators.

First Generation (1951-1956)

The first generation marked the beginning of commercial computing including the introduction of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701.

System operation was performed with the help of expert operators and without the benefit of an operating system for a time though programs began to be written in higher level, procedure-oriented languages, and thus the operator’s routine expanded. Later mono-programmed operating system was developed, which eliminated some of the human intervention in running job and provided programmers with a number of desirable functions. These systems still continued to operate under the control of a human operator who used to follow a number of steps to execute a program. Programming language like FORTRAN was developed by John W. Backus in 1956.

Second Generation (1956-1964)

The second generation of computer hardware was most notably characterised by transistors replacing vacuum tubes as the hardware component technology. The first operating system, GMOS, was developed by General Motors for IBM's machines. GMOS was a single-stream batch processing system: it collected similar jobs into groups or batches and submitted them to the operating system on punch cards, to be completed one after another on the machine. The operating system was cleaned up after completing one job and then continued to read and initiate the next job from the punch cards.

Researchers began to experiment with multiprogramming and multiprocessing in their computing services called the time-sharing system. A noteworthy example is the Compatible Time Sharing System (CTSS), developed at MIT during the early 1960s.

Third Generation (1964-1979)

The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of computers. Hardware technology began to use integrated circuits (ICs) which yielded significant advantages in both speed and economy.

Operating system development continued with the introduction and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer’s data channel I/O capabilities continued to develop.

Another development that led to the personal computers of the fourth generation was the appearance of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time, indeed, for the development of both computer hardware and the accompanying operating systems.

Fourth Generation (1979 – Present)

The fourth generation is characterised by the appearance of the personal computer and the workstation. The component technology of the third generation was replaced by very large scale integration (VLSI). Many of the Operating Systems we use today, like Windows, Linux and macOS, were developed in the fourth generation.

Following are some of important functions of an operating System.

  • Memory Management
  • Processor Management
  • Device Management
  • File Management
  • Network Management
  • Security
  • Control over system performance
  • Job accounting
  • Error detecting aids
  • Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array of words or bytes where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −

·         Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.

·         In multiprogramming, the OS decides which process will get memory when and how much.

·         Allocates the memory when a process requests it to do so.

·         De-allocates the memory when a process no longer needs it or has been terminated.

Processor Management

In multiprogramming environment, the OS decides which process gets the processor when and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −

·         Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.

·         Allocates the processor (CPU) to a process.

·         De-allocates processor when a process is no longer required.

Device Management

An Operating System manages device communication via their respective drivers. It does the following activities for device management −

·         Keeps tracks of all devices. Program responsible for this task is known as the I/O controller.

·         Decides which process gets the device when and for how much time.

·         Allocates the device in the efficient way.

·         De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management −

·         Keeps track of information, location, uses, status etc. The collective facilities are often known as file system.

·         Decides who gets the resources.

·         Allocates the resources.

·         De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

·         Security − By means of password and similar other techniques, it prevents unauthorized access to programs and data.

·         Control over system performance − Recording delays between request for a service and response from the system.

·         Job accounting − Keeping track of time and resources used by various jobs and users.

·         Error detecting aids − Production of dumps, traces, error messages, and other debugging and error detecting aids.

·         Coordination between other softwares and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.

Components of Operating System

There are various components of an Operating System that perform well-defined tasks. Though most Operating Systems differ in structure, logically they have similar components. Each component must be a well-defined portion of the system that appropriately describes its functions, inputs, and outputs.

There are following 8-components of an Operating System:

1.     Process Management

2.     I/O Device Management

3.     File Management

4.     Network Management

5.     Main Memory Management

6.     Secondary Storage Management

7.     Security Management

8.     Command Interpreter System

Following section explains all the above components in more detail:

Process Management

A process is a program, or a fraction of a program, that is loaded into main memory. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The process management component manages the multiple processes running simultaneously on the Operating System.

A program in running state is called a process.

The operating system is responsible for the following activities in connection with process management:

  • Create, load, execute, suspend, resume, and terminate processes.
  • Switch between multiple processes in main memory.
  • Provide communication mechanisms so that processes can communicate with each other.
  • Provides synchronization mechanisms to control concurrent access to shared data to keep shared data consistent.
  • Allocate/de-allocate resources properly to prevent or avoid deadlock situation.
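As a small, hedged illustration of process creation and termination on a POSIX system (this sketch is not from the original notes; error checking omitted):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                        /* create a new process */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);   /* load and execute a program */
        _exit(1);                              /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);                     /* parent waits for the child to terminate */
    printf("child %d finished\n", (int)pid);
    return 0;
}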

I/O Device Management

One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. I/O Device Management provides an abstraction level over hardware devices and keeps the details away from applications, to ensure proper use of devices, to prevent errors, and to provide users with a convenient and efficient programming environment.

Following are the tasks of I/O Device Management component:

  • Hide the details of H/W devices
  • Manage main memory for the devices using cache, buffer, and spooling
  • Maintain and provide custom drivers for each device.

File Management

File management is one of the most visible services of an operating system. Computers can store information in several different physical forms; magnetic tape, disk, and drum are the most common forms.

A file is defined as a set of correlated information and it is defined by the creator of the file. Mostly files represent data, source and object forms, and programs. Data files can be of any type like alphabetic, numeric, and alphanumeric.

A file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user.

The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are also normally organized into directories to ease their use. These directories may contain files and other directories, and so on.

The operating system is responsible for the following activities in connection with file management:

  • File creation and deletion
  • Directory creation and deletion
  • The support of primitives for manipulating files and directories
  • Mapping files onto secondary storage
  • File backup on stable (nonvolatile) storage media

Network Management

The definition of network management is often broad, as network management involves several different components. Network management is the process of managing and administering a computer network. A computer network is a collection of various types of computers connected with each other.

Network management comprises fault analysis, maintaining the quality of service, provisioning of networks, and performance management.

Network management is the process of keeping your network healthy for an efficient communication between different computers.

Following are the features of network management:

  • Network administration
  • Network maintenance
  • Network operation
  • Network provisioning
  • Network security

Main Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices.

Main memory is a volatile storage device which means it loses its contents in the case of system failure or as soon as system power goes down.

The main motivation behind Memory Management is to maximize memory utilization on the computer system.

The operating system is responsible for the following activities in connections with memory management:

  • Keep track of which parts of memory are currently being used and by whom.
  • Decide which processes to load when memory space becomes available.
  • Allocate and deallocate memory space as needed.

Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory.

Most modern computer systems use disks as the principal online storage medium, for both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing.

The operating system is responsible for the following activities in connection with disk management:

  • Free space management
  • Storage allocation

  • Disk scheduling

Security Management

The operating system is primarily responsible for all the tasks and activities that happen in the computer system. The various processes in an operating system must be protected from each other's activities. For that purpose, various mechanisms are used to ensure that the files, memory segments, CPU and other resources can be operated on only by those processes that have gained proper authorization from the operating system.

Security Management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by the computer system. It must provide a way to specify the controls to be imposed, together with some means of enforcement.

For example, memory-addressing hardware ensures that a process can execute only within its own address space. The timer ensures that no process can gain control of the CPU without relinquishing it. Finally, no process is allowed to do its own I/O, in order to protect the integrity of the various peripheral devices.

Command Interpreter System

One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system.

Command Interpreter System executes a user command by calling one or more number of underlying system programs or system calls.

Command Interpreter System allows human users to interact with the Operating System and provides convenient programming environment to the users.

Many commands are given to the operating system by control statements. A program which reads and interprets control statements is executed automatically. This program is called the shell; a few examples are the Windows DOS command window, Bash on Unix/Linux and the C shell on Unix/Linux.

Other Important Activities

An Operating System is a complex Software System. Apart from the above mentioned components and responsibilities, there are many other activities performed by the Operating System. Few of them are listed below:

  • Security − By means of password and similar other techniques, it prevents unauthorized access to programs and data.
  • Control over system performance − Recording delays between request for a service and response from the system.
  • Job accounting − Keeping track of time and resources used by various jobs and users.
  • Error detecting aids − Production of dumps, traces, error messages, and other debugging and error detecting aids.
  • Coordination between other softwares and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.

Types of Operating System

Operating systems are there from the very first computer generation and they keep evolving with time. In this chapter, we will discuss some of the important types of operating systems which are most commonly used.

Batch operating system

The users of a batch operating system do not interact with the computer directly. Each user prepares his job on an off-line device like punch cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. The programmers leave their programs with the operator and the operator then sorts the programs with similar requirements into batches.

The problems with Batch Systems are as follows −

·         Lack of interaction between the user and the job.

·         CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.

·         Difficult to provide the desired priority.

Time-sharing operating systems

Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. Processor's time which is shared among multiple users simultaneously is termed as time-sharing.

The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in case of Multiprogrammed batch systems, the objective is to maximize processor use, whereas in Time-Sharing Systems, the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, and the switches occur so frequently that the user can receive an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum. When the user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time. Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.

Advantages of Timesharing operating systems are as follows −

·         Provides the advantage of quick response.

·         Avoids duplication of software.

·         Reduces CPU idle time.

Disadvantages of Time-sharing operating systems are as follows −

·         Problem of reliability.

·         Question of security and integrity of user programs and data.

·         Problem of data communication.

Distributed operating System

Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. Data processing jobs are distributed among the processors accordingly.

The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on.

The advantages of distributed systems are as follows −

·         With resource sharing facility, a user at one site may be able to use the resources available at another.

·         Speedup the exchange of data with one another via electronic mail.

·         If one site fails in a distributed system, the remaining sites can potentially continue operating.

·         Better service to the customers.

·         Reduction of the load on the host computer.

·         Reduction of delays in data processing.

Network operating System

A Network Operating System runs on a server and provides the server the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of the network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN), a private network or to other networks.

Examples of network operating systems include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

The advantages of network operating systems are as follows −

·         Centralized servers are highly stable.

·         Security is server managed.

·         Upgrades to new technologies and hardware can be easily integrated into the system.

·         Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows −

·         High cost of buying and running a server.

·         Dependency on a central location for most operations.

·         Regular maintenance and updates are required.

Real Time operating System

A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much shorter than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data and real-time systems can be used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. For example, Scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

There are two types of real-time operating systems.

Hard real-time systems

Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never found.

Soft real-time systems

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples are multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.

Operating System - Services

An Operating System provides services to both the users and to the programs.

·         It provides programs an environment to execute.

·         It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system −

·         Program execution

·         I/O operations

·         File System manipulation

·         Communication

·         Error Detection

·         Resource Allocation

·         Protection

Program execution

Operating systems handle many kinds of activities from user programs to system programs like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.

A process includes the complete execution context (code to execute, data to manipulate, registers, OS resources in use). Following are the major activities of an operating system with respect to program management −

·         Loads a program into memory.

·         Executes the program.

·         Handles program's execution.

·         Provides a mechanism for process synchronization.

·         Provides a mechanism for process communication.

·         Provides a mechanism for deadlock handling.

I/O Operation

An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.

An Operating System manages the communication between user and device drivers.

·         I/O operation means read or write operation with any file or any specific I/O device.

·         Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computers can store files on the disk (secondary storage), for long-term storage purpose. Examples of storage media include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own properties like speed, capacity, data transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −

·         Program needs to read a file or write a file.

·         The operating system gives the permission to the program for operation on file.

·         Permission varies from read-only, read-write, denied and so on.

·         Operating System provides an interface to the user to create/delete files.

·         Operating System provides an interface to the user to create/delete directories.

·         Operating System provides an interface to create the backup of file system.

Communication

In case of distributed systems which are a collection of processors that do not share memory, peripheral devices, or a clock, the operating system manages communications between all the processes. Multiple processes communicate with one another through communication lines in the network.

The OS handles routing and connection strategies, and the problems of contention and security. Following are the major activities of an operating system with respect to communication −

·         Two processes often require data to be transferred between them

·         Both the processes can be on one computer or on different computers, but are connected through a computer network.

·         Communication may be implemented by two methods, either by Shared Memory or by Message Passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the memory hardware. Following are the major activities of an operating system with respect to error handling −

·         The OS constantly checks for possible errors.

·         The OS takes an appropriate action to ensure correct and consistent computing.

Resource Management

In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles and file storage have to be allocated to each user or job. Following are the major activities of an operating system with respect to resource management −

·         The OS manages all kinds of resources using schedulers.

·         CPU scheduling algorithms are used for better utilization of CPU.

Protection

Considering a computer system having multiple users and concurrent execution of multiple processes, the various processes must be protected from each other's activities.

Protection refers to a mechanism or a way to control the access of programs, processes, or users to the resources defined by a computer system. Following are the major activities of an operating system with respect to protection −

·         The OS ensures that all access to system resources is controlled.

·         The OS ensures that external I/O devices are protected from invalid access attempts.

·         The OS provides authentication features for each user by means of passwords.

Operating System - Properties

Following are the different properties of an Operating System. This tutorial will explain these properties in detail one by one:

1.     Batch processing

2.     Multitasking

3.     Multiprogramming

4.     Interactivity

5.     Real Time System

6.     Distributed Environment

7.     Spooling

Batch processing

Batch processing is a technique in which an Operating System collects the programs and data together in a batch before processing starts. An operating system does the following activities related to batch processing −

·         The OS defines a job which has predefined sequence of commands, programs and data as a single unit.

·         The OS keeps a number of jobs in memory and executes them without any manual intervention.

·         Jobs are processed in the order of submission, i.e., first come first served fashion.

·         When a job completes its execution, its memory is released and the output for the job gets copied into an output spool for later printing or processing.

Batch Processing

Advantages

·         Batch processing takes much of the work of the operator to the computer.

·         Increased performance, as a new job gets started as soon as the previous job finishes, without any manual intervention.

Disadvantages

  • Difficult to debug program.
  • A job could enter an infinite loop.
  • Due to lack of protection scheme, one batch job can affect pending jobs.

Multitasking

Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between them. Switches occur so frequently that the users may interact with each program while it is running. An OS does the following activities related to multitasking −

·         The user gives instructions to the operating system or to a program directly, and receives an immediate response.

·         The OS handles multitasking by executing multiple operations/programs at a time.

·         Multitasking Operating Systems are also known as Time-sharing systems.

·         These Operating Systems were developed to provide interactive use of a computer system at a reasonable cost.

·         A time-shared operating system uses the concept of CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared CPU.

·         Each user has at least one separate program in memory.


·         A program that is loaded into memory and is executing is commonly referred to as a process.

·         When a process executes, it typically executes for only a very short time before it either finishes or needs to perform I/O.

·         Since interactive I/O typically runs at slower speeds, it may take a long time to complete. During this time, a CPU can be utilized by another process.

·         The operating system allows the users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user.

·         As the system switches CPU rapidly from one user/program to the next, each user is given the impression that he/she has his/her own CPU, whereas actually one CPU is being shared among many users.

Multiprogramming

Sharing the processor, when two or more programs reside in memory at the same time, is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.

The following figure shows the memory layout for a multiprogramming system.

Memory layout

An OS does the following activities related to multiprogramming.

·         The operating system keeps several jobs in memory at a time.

·         This set of jobs is a subset of the jobs kept in the job pool.

·         The operating system picks and begins to execute one of the jobs in the memory.

·         Multiprogramming operating systems monitor the state of all active programs and system resources using memory management programs to ensure that the CPU is never idle, unless there are no jobs to process.

Advantages

  • High and efficient CPU utilization.
  • User feels that many programs are allotted CPU almost simultaneously.

Disadvantages

  • CPU scheduling is required.
  • To accommodate many jobs in memory, memory management is required.

Interactivity

Interactivity refers to the ability of users to interact with a computer system. An Operating system does the following activities related to interactivity −

  • Provides the user an interface to interact with the system.
  • Manages input devices to take inputs from the user. For example, keyboard.
  • Manages output devices to show outputs to the user. For example, Monitor.

The response time of the OS needs to be short, since the user submits and waits for the result.

Real Time System

Real-time systems are usually dedicated, embedded systems. An operating system does the following activities related to real-time system activity.

  • In such systems, Operating Systems typically read from and react to sensor data.
  • The Operating system must guarantee response to events within fixed periods of time to ensure correct performance.

Distributed Environment

A distributed environment refers to multiple independent CPUs or processors in a computer system. An operating system does the following activities related to distributed environment −

·         The OS distributes computation logic among several physical processors.

·         The processors do not share memory or a clock. Instead, each processor has its own local memory.

·         The OS manages the communications between the processors. They communicate with each other through various communication lines.

Spooling

Spooling is an acronym for simultaneous peripheral operations on line. Spooling refers to putting data of various I/O jobs in a buffer. This buffer is a special area in memory or hard disk which is accessible to I/O devices.

An operating system does the following activities related to spooling −

·         Handles I/O device data spooling as devices have different data access rates.

·         Maintains the spooling buffer which provides a waiting station where data can rest while the slower device catches up.

·         Supports parallel computation, because the spooling process lets a computer perform I/O in a parallel fashion. It becomes possible to have the computer read data from a tape, write data to disk and write out to a printer while it is doing its computing task.


Advantages

  • The spooling operation uses a disk as a very large buffer.
  • Spooling is capable of overlapping I/O operation for one job with processor operations for another job.

Operating System - Processes

Process

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the system.

To put it in simple terms, we write our computer programs in a text file and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into the memory and it becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following image shows a simplified layout of a process inside main memory −

Process Components

1. Stack: The process stack contains temporary data such as method/function parameters, return addresses and local variables.

2. Heap: This is memory dynamically allocated to the process during its run time.

3. Text: This is the compiled program code. The current activity is represented by the value of the Program Counter and the contents of the processor's registers.

4. Data: This section contains the global and static variables.

Program

A program is a piece of code which may be a single line or millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in C programming language −

#include <stdio.h>

int main() {
   printf("Hello, World! \n");
   return 0;
}

A computer program is a collection of instructions that performs a specific task when executed by a computer. When we compare a program with a process, we can conclude that a process is a dynamic instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A collection of computer programs, libraries and related data are referred to as a software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

1. Start: This is the initial state when a process is first started/created.

2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler to assign the CPU to some other process.

3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.

5. Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.

Process States

Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process as listed below in the table −

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, or whatever.

2. Process privileges: This is required to allow/disallow access to system resources.

3. Process ID: Unique identification for each process in the operating system.

4. Pointer: A pointer to the parent process.

5. Program Counter: The Program Counter is a pointer to the address of the next instruction to be executed for this process.

6. CPU registers: The various CPU registers whose contents must be saved and restored when the process runs in the running state.

7. CPU Scheduling Information: Process priority and other scheduling information which is required to schedule the process.

8. Memory management information: This includes information such as the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.

9. Accounting information: This includes the amount of CPU used for process execution, time limits, execution ID, etc.

10. IO status information: This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain different information in different operating systems. Here is a simplified diagram of a PCB −

Process Control Block

The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
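To make the listed fields concrete, here is a minimal, simplified sketch in C of what a PCB-like structure might look like. The field names, types and sizes are illustrative assumptions for this tutorial, not the layout used by any real operating system.

#include <stdint.h>
#include <stdio.h>

/* Illustrative process states (real kernels define their own sets). */
typedef enum { STATE_START, STATE_READY, STATE_RUNNING, STATE_WAITING, STATE_TERMINATED } proc_state_t;

/* A toy Process Control Block holding the kinds of information listed above.
   Every field here is an assumption made purely for illustration. */
typedef struct pcb {
    int           pid;              /* Process ID */
    proc_state_t  state;            /* Process state */
    int           priority;         /* CPU scheduling information */
    struct pcb   *parent;           /* Pointer to the parent process */
    uint64_t      program_counter;  /* Address of the next instruction */
    uint64_t      registers[16];    /* Saved CPU registers */
    uint64_t      base, limit;      /* Memory management information (base/limit example) */
    uint64_t      cpu_time_used;    /* Accounting information */
    int           open_devices[8];  /* I/O status information: allocated device IDs */
} pcb_t;

int main() {
    pcb_t p = { .pid = 1, .state = STATE_READY, .priority = 5 };
    printf("PCB for process %d created in state %d\n", p.pid, p.state);
    return 0;
}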

Operating System - Process Scheduling

Definition

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1.     Non-preemptive: Here the resource can’t be taken from a process until the process completes execution. The switching of resources occurs when the running process terminates and moves to a waiting state.

2.     Preemptive: Here the OS allocates the resources to a process for a fixed amount of time. During resource allocation, a process may switch from the running state to the ready state or from the waiting state to the ready state. This switching occurs because the CPU may give priority to other processes and replace the currently running process with a higher-priority process.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a separate queue for each of the process states and PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

  • Job queue − This queue keeps all the processes in the system.
  • Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
  • Device queues − The processes which are blocked due to unavailability of an I/O device constitute this queue.

Process Scheduling Queuing

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues which can only have one entry per processor core on the system; in the above diagram, it has been merged with the CPU.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below −

1. Running: When a new process is created, it enters into the system in the running state.

2. Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, that process is transferred to the waiting queue. If the process has completed or aborted, the process is discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types −

  • Long-Term Scheduler
  • Short-Term Scheduler
  • Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution. The process is loaded into memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available or may be minimal. Time-sharing operating systems have no long-term scheduler. When a process changes state from new to ready, the long-term scheduler is used.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It performs the change of state of a process from ready to running. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes the processes from the memory. It reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling the swapped out-processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This process is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Its speed is less than that of the short-term scheduler. | It is the fastest of the three. | Its speed lies between those of the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the job pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.

Context Switching

Context switching is the mechanism of storing and restoring the state or context of a CPU in the Process Control Block so that a process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state from the current running process is stored into the process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.

Process Context Switch

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use.

  • Program Counter
  • Scheduling information
  • Base and limit register value
  • Currently used register
  • Changed State
  • I/O State information
  • Accounting information

Operating System Scheduling algorithms

A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss in this chapter −

·         First-Come, First-Served (FCFS) Scheduling

·         Shortest-Job-Next (SJN) Scheduling

·         Priority Scheduling

·         Shortest Remaining Time

·         Round Robin(RR) Scheduling

·         Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority running process anytime when a high priority process enters into a ready state.

First Come First Serve (FCFS)

·         Jobs are executed on first come, first serve basis.

·         It is a non-preemptive scheduling algorithm.

·         Easy to understand and implement.

·         Its implementation is based on FIFO queue.

·         Poor in performance as average wait time is high.

First Come First Serve Scheduling Algorithm

Wait time of each process is as follows −

Process | Wait Time : Service Time - Arrival Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
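The waiting times shown above can be reproduced with a short program. The following C sketch uses the arrival and execution times from this example; everything else (variable names, output format) is purely illustrative.

#include <stdio.h>

/* FCFS: processes run in arrival order; waiting time = start (service) time - arrival time. */
int main() {
    int arrival[] = {0, 1, 2, 3};   /* arrival times of P0..P3 from the example */
    int burst[]   = {5, 3, 8, 6};   /* execution (burst) times from the example */
    int n = 4, clock = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i])      /* CPU idles until the process arrives */
            clock = arrival[i];
        int wait = clock - arrival[i];
        printf("P%d wait = %d\n", i, wait);
        total_wait += wait;
        clock += burst[i];           /* run the process to completion */
    }
    printf("Average wait = %.2f\n", total_wait / n);   /* prints 5.75 */
    return 0;
}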

Shortest Job Next (SJN)

·         This is also known as shortest job first, or SJF

·         This is a non-preemptive scheduling algorithm.

·         Best approach to minimize waiting time.

·         Easy to implement in Batch systems where required CPU time is known in advance.

·         Impossible to implement in interactive systems where required CPU time is not known.

·         The processor should know in advance how much time a process will take.

Given: Table of processes, and their Arrival time, Execution time

Process | Arrival Time | Execution Time | Service Time
P0 | 0 | 5 | 0
P1 | 1 | 3 | 5
P2 | 2 | 8 | 14
P3 | 3 | 6 | 8

Shortest Job First Scheduling Algorithm

Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 14 - 2 = 12
P3 | 8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
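The same idea works for non-preemptive Shortest Job Next: at every scheduling point, pick the arrived, unfinished process with the smallest execution time. The following C sketch reuses the example's data and reproduces the average waiting time of 5.25; the loop structure and names are illustrative assumptions.

#include <stdio.h>

/* Non-preemptive Shortest Job Next: among the arrived, unfinished processes,
   always run the one with the smallest burst time to completion. */
int main() {
    int arrival[] = {0, 1, 2, 3};    /* from the example table */
    int burst[]   = {5, 3, 8, 6};
    int done[4]   = {0};
    int n = 4, clock = 0, finished = 0;
    double total_wait = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= clock &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { clock++; continue; }   /* nothing has arrived yet */
        int wait = clock - arrival[pick];        /* service time - arrival time */
        printf("P%d wait = %d\n", pick, wait);
        total_wait += wait;
        clock += burst[pick];
        done[pick] = 1;
        finished++;
    }
    printf("Average wait = %.2f\n", total_wait / n);   /* prints 5.25 */
    return 0;
}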

Priority Based Scheduling

·         Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.

·         Each process is assigned a priority. Process with highest priority is to be executed first and so on.

·         Processes with same priority are executed on first come first served basis.

·         Priority can be decided based on memory requirements, time requirements or any other resource requirement.

Given: Table of processes, and their Arrival time, Execution time, and priority. Here we consider 1 to be the lowest priority.

Process | Arrival Time | Execution Time | Priority | Service Time
P0 | 0 | 5 | 1 | 0
P1 | 1 | 3 | 2 | 11
P2 | 2 | 8 | 1 | 14
P3 | 3 | 6 | 3 | 5

Priority Scheduling Algorithm

Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 11 - 1 = 10
P2 | 14 - 2 = 12
P3 | 5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6

Shortest Remaining Time

·         Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.

·         The processor is allocated to the job closest to completion but it can be preempted by a newer ready job with shorter time to completion.

·         Impossible to implement in interactive systems where required CPU time is not known.

·         It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

·         Round Robin is a preemptive process scheduling algorithm.

·         Each process is provided a fixed time to execute, called a quantum.

·         Once a process has executed for the given time period, it is preempted and another process executes for its time period.

·         Context switching is used to save states of preempted processes.

Round Robin Scheduling Algorithm

Wait time of each process is as follows −

Process | Wait Time : Service Time - Arrival Time
P0 | (0 - 0) + (12 - 3) = 9
P1 | (3 - 1) = 2
P2 | (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 | (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
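The waiting times above are consistent with a time quantum of 3, so the following C sketch assumes that value (the quantum itself is not stated in the text and is an assumption here). It simulates Round Robin on the example's processes and reproduces the average waiting time of 8.5; as a simplification, it cycles over the process list in a fixed order rather than keeping a true arrival queue, which gives the same schedule for this data.

#include <stdio.h>

int main() {
    int arrival[]     = {0, 1, 2, 3};      /* from the example */
    int burst[]       = {5, 3, 8, 6};
    int remaining[]   = {5, 3, 8, 6};
    int completion[4] = {0};
    int n = 4, quantum = 3, clock = 0, finished = 0;   /* quantum of 3 is assumed */

    while (finished < n) {
        int did_work = 0;
        for (int i = 0; i < n; i++) {            /* one pass over the circular ready list */
            if (remaining[i] > 0 && arrival[i] <= clock) {
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;                  /* the process runs for one quantum (or less) */
                remaining[i] -= slice;
                did_work = 1;
                if (remaining[i] == 0) { completion[i] = clock; finished++; }
            }
        }
        if (!did_work) clock++;                  /* nothing is ready yet, so let time pass */
    }

    double total_wait = 0;
    for (int i = 0; i < n; i++) {
        int wait = completion[i] - arrival[i] - burst[i];   /* turnaround minus burst */
        printf("P%d wait = %d\n", i, wait);
        total_wait += wait;
    }
    printf("Average wait = %.2f\n", total_wait / n);        /* prints 8.50 */
    return 0;
}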

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other existing algorithms to group and schedule jobs with common characteristics.

·         Multiple queues are maintained for processes with common characteristics.

·         Each queue can have its own scheduling algorithms.

·         Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to the queue.

Operating System - Multi-Threading

What is Thread?

A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.

A thread shares with its peer threads some information, such as the code segment, the data segment and open files. When one thread alters a code segment memory item, all other threads see that change.

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving the performance of an operating system by reducing the overhead; a thread is equivalent to a classical process.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.

Single vs Multithreaded Process

Difference between Process and Thread

S.N. | Process | Thread
1 | A process is heavyweight or resource intensive. | A thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.

Advantages of Thread

·         Threads minimize the context switching time.

·         Use of threads provides concurrency within a process.

·         Efficient communication.

·         It is more economical to create and context switch threads.

·         Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
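As a concrete illustration of threads sharing one process's data, here is a minimal sketch using the POSIX threads (pthreads) library; the worker function, counter and thread count are invented for the example. It would typically be built with the pthread library linked in (e.g., gcc file.c -pthread).

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* data segment shared by all threads of the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread runs this function concurrently within the same address space. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    pthread_mutex_lock(&lock);          /* mutual exclusion on the shared data */
    shared_counter++;
    printf("thread %d sees counter = %d\n", id, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main() {
    pthread_t threads[3];
    int ids[3] = {0, 1, 2};
    for (int i = 0; i < 3; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 3; i++)
        pthread_join(threads[i], NULL); /* wait for all threads to finish */
    return 0;
}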

Types of Thread

Threads are implemented in the following two ways −

·         User Level Threads − User-managed threads.

·         Kernel Level Threads − Threads managed by the operating system, acting on the kernel, which is the operating system core.

User Level Threads

In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing message and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.

User level thread

Advantages

·         Thread switching does not require Kernel mode privileges.

·         User level thread can run on any operating system.

·         Scheduling can be application specific in the user level thread.

·         User level threads are fast to create and manage.

Disadvantages

·         In a typical operating system, most system calls are blocking.

·         Multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages

·         The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.

·         If one thread in a process is blocked, the Kernel can schedule another thread of the same process.

·         Kernel routines themselves can be multithreaded.

Disadvantages

·         Kernel threads are generally slower to create and manage than the user threads.

·         Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user-level thread and kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types −

·         Many to many relationship.

·         Many to one relationship.

·         One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

The following diagram shows the many-to-many threading model where 6 user level threads are multiplexing with 6 kernel level threads. In this model, developers can create as many user threads as necessary and the corresponding Kernel threads can run in parallel on a multiprocessor machine. This model provides the best accuracy on concurrency and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to many thread model

Many to One Model

The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread library is implemented on an operating system whose kernel does not support threads, then the many-to-one relationship mode is used.

Many to one thread model

One to One Model

There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It allows multiple threads to execute in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

One to one thread model

Difference between User-Level & Kernel-Level Thread

S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 | A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 | Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

Operating System - Memory Management

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and updates the status correspondingly.

This tutorial will teach you basic concepts related to Memory Management.

Process Address Space

The process address space is the set of logical addresses that a process references in its code. For example, when 32-bit addressing is in use, addresses can range from 0 to 0x7fffffff; that is, 2^31 possible numbers, for a total theoretical size of 2 gigabytes.

The operating system takes care of mapping the logical addresses to physical addresses at the time of memory allocation to the program. There are three types of addresses used in a program before and after memory is allocated −

1. Symbolic addresses: The addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space.

2. Relative addresses: At the time of compilation, a compiler converts symbolic addresses into relative addresses.

3. Physical addresses: The loader generates these addresses at the time when a program is loaded into main memory.

Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and physical addresses differ in execution-time address-binding scheme.

The set of all logical addresses generated by a program is referred to as a logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.

The runtime mapping from virtual to physical address is done by the memory management unit (MMU) which is a hardware device. MMU uses following mechanism to convert virtual address to physical address.

·         The value in the base register is added to every address generated by a user process, which is treated as offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically reallocated to location 10100.

·         The user program deals with virtual addresses; it never sees the real physical addresses.
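The relocation just described can be sketched in a few lines of C. The base value 10000 comes from the example above; the limit value and the function name are assumptions chosen only for illustration.

#include <stdio.h>

/* Dynamic relocation with a base (relocation) register and a limit register. */
unsigned translate(unsigned virtual_addr, unsigned base, unsigned limit) {
    if (virtual_addr >= limit) {            /* address outside the process address space */
        printf("trap: illegal address %u\n", virtual_addr);
        return 0;
    }
    return base + virtual_addr;             /* physical address = base + virtual offset */
}

int main() {
    /* Base register 10000, as in the example: virtual address 100 maps to 10100. */
    printf("physical = %u\n", translate(100, 10000, 30000));
    return 0;
}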

Static vs Dynamic Loading

The choice between static and dynamic loading is made at the time the computer program is being developed. If you have to load your program statically, then at the time of compilation the complete program will be compiled and linked without leaving any external program or module dependency. The linker combines the object program with the other necessary object modules into an absolute program, which also includes logical addresses.

If you are writing a Dynamically loaded program, then your compiler will compile the program and for all the modules which you want to include dynamically, only references will be provided and rest of the work will be done at the time of execution.

At the time of loading, with static loading, the absolute program (and data) is loaded into memory in order for execution to start.

If you are using dynamic loading, dynamic routines of the library are stored on a disk in relocatable form and are loaded into memory only when they are needed by the program.

Static vs Dynamic Linking

As explained above, when static linking is used, the linker combines all other modules needed by a program into a single executable program to avoid any runtime dependency.

When dynamic linking is used, it is not required to link the actual module or library with the program, rather a reference to the dynamic module is provided at the time of compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared Objects in Unix are good examples of dynamic libraries.

Swapping

Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or move) to secondary storage (disk) and make that memory available to other processes. At some later time, the system swaps back the process from the secondary storage to main memory.

Though performance is usually affected by the swapping process, swapping helps in running multiple big processes in parallel, and that is the reason swapping is also known as a technique for memory compaction.

Process Swapping

The total time taken by swapping process includes the time it takes to move the entire process to a secondary disk and then to copy the process back to memory, as well as the time the process takes to regain main memory.

Let us assume that the user process is of size 2048KB and that the standard hard disk where swapping will take place has a data transfer rate of around 1 MB per second. The actual transfer of the 2048KB process to or from memory will take

2048KB / 1024KB per second
= 2 seconds
= 2000 milliseconds

Now, considering both the swap-out and swap-in time, it will take a total of 4000 milliseconds, plus other overhead while the process competes to regain main memory.

Memory Allocation

Main memory usually has two partitions −

·         Low Memory − Operating system resides in this memory.

·         High Memory − User processes are held in high memory.

The operating system uses the following memory allocation mechanisms.

1. Single-partition allocation: In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses. Each logical address must be less than the limit register.

2. Multiple-partition allocation: In this type of allocation, main memory is divided into a number of fixed-sized partitions where each partition should contain only one process. When a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.

Fragmentation

As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and these memory blocks remain unused. This problem is known as Fragmentation.

Fragmentation is of two types −

1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation: The memory block assigned to a process is bigger than the memory requested. Some portion of the memory is left unused, as it cannot be used by another process.

The following diagram shows how fragmentation can cause waste of memory and a compaction technique can be used to create more free memory out of fragmented memory −

Memory Fragmentation

External fragmentation can be reduced by compaction, i.e., shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.

The internal fragmentation can be reduced by effectively assigning the smallest partition but large enough for the process.

Paging

A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.

Paging is a memory management technique in which process address space is broken into blocks of the same size called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the number of pages.

Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames and the size of a frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external fragmentation.

Paging

Address Translation

Page address is called logical address and represented by page number and the offset.

Logical Address = Page number + page offset

Frame address is called physical address and represented by a frame number and the offset.

Physical Address = Frame number + page offset

A data structure called page map table is used to keep track of the relation between a page of a process to a frame in physical memory.

Page Map Table

When the system allocates a frame to any page, it translates this logical address into a physical address and creates an entry in the page table, to be used throughout the execution of the program.

When a process is to be executed, its corresponding pages are loaded into any available memory frames. Suppose you have a program of 8KB but your memory can accommodate only 5KB at a given point in time; then the paging concept comes into the picture. When a computer runs out of RAM, the operating system (OS) will move idle or unwanted pages of memory to secondary memory to free up RAM for other processes, and bring them back when needed by the program.

This process continues during the whole execution of the program: the OS keeps removing idle pages from main memory, writing them to secondary memory, and bringing them back when required by the program.
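A short C sketch of the translation that the page map table performs is given below; the page size, the table contents and the names are assumptions chosen only for illustration.

#include <stdio.h>

#define PAGE_SIZE 1024          /* assumed page size (a power of 2) */

/* Toy page map table: page_table[page number] = frame number. */
int page_table[] = {5, 2, 7, 0};

/* Split a logical address into page number and offset, then rebuild the
   physical address as frame number * PAGE_SIZE + offset. */
unsigned to_physical(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;
    unsigned offset = logical % PAGE_SIZE;
    return page_table[page] * PAGE_SIZE + offset;
}

int main() {
    unsigned logical = 2 * PAGE_SIZE + 100;     /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, to_physical(logical));
    return 0;
}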

Advantages and Disadvantages of Paging

Here is a list of advantages and disadvantages of paging −

·         Paging reduces external fragmentation, but still suffers from internal fragmentation.

·         Paging is simple to implement and is regarded as an efficient memory management technique.

·         Due to equal size of the pages and frames, swapping becomes very easy.

·         Page table requires extra memory space, so may not be good for a system having small RAM.

Segmentation

Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions. Each segment is actually a different logical address space of the program.

When a process is to be executed, its corresponding segments are loaded into non-contiguous memory, though every segment is loaded into a contiguous block of available memory.

Segmentation memory management works very similarly to paging, but here segments are of variable length whereas in paging pages are of fixed size.

A program segment contains the program's main function, utility functions, data structures, and so on. The operating system maintains a segment map table for every process and a list of free memory blocks along with segment numbers, their size and corresponding memory locations in main memory. For each segment, the table stores the starting address of the segment and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.

Segment Map Table

Operating System - Virtual Memory

A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory and it is a section of a hard disk that's set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.

Following are the situations, when entire program is not required to be loaded fully in main memory.

·         User-written error handling routines are used only when an error occurs in the data or computation.

·         Certain options and features of a program may be used rarely.

·         Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

·         The ability to execute a program that is only partially in memory would confer many benefits.

·         Fewer I/Os would be needed to load or swap each user program into memory.

·         A program would no longer be constrained by the amount of physical memory that is available.

·         Each user program could take less physical memory, and more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses. A basic example is given below −

Virtual Memory

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging

A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program’s pages out to the disk or any of the new program’s pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program’s pages as they are referenced.

Demand Paging

While executing a program, if the program references a page which is not available in main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.

Advantages

Following are the advantages of Demand Paging −

·         Large virtual memory.

·         More efficient use of memory.

·         There is no limit on degree of multiprogramming.

Disadvantages

·         The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.

Page Replacement Algorithm

Page replacement algorithms are the techniques by which an Operating System decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used to satisfy the allocation, either because no pages are available or because the number of free pages is lower than required.

When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.

A page replacement algorithm looks at the limited information about page accesses provided by the hardware, and tries to select the pages that should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary storage and of the processor time consumed by the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String

The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things.

·         For a given page size, we need to consider only the page number, not the entire address.

·         If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference; the immediately following references will not fault.

·         For example, consider the following sequence of addresses − 123,215,600,1234,76,96

·         If page size is 100, then the reference string is 1,2,6,12,0,0
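The reference string above is obtained simply by dividing each address by the page size, as this small C sketch shows:

#include <stdio.h>

int main() {
    int addresses[] = {123, 215, 600, 1234, 76, 96};   /* addresses from the example */
    int page_size = 100;
    for (int i = 0; i < 6; i++)
        printf("%d ", addresses[i] / page_size);        /* prints 1 2 6 12 0 0 */
    printf("\n");
    return 0;
}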

First In First Out (FIFO) algorithm

·         Oldest page in main memory is the one which will be selected for replacement.

·         Easy to implement, keep a list, replace pages from the tail and add new pages at the head.

First In First Out
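Here is a minimal C sketch of FIFO page replacement. The number of frames and the reference string are assumptions made for illustration; the point is only that the oldest resident page is always the one replaced.

#include <stdio.h>

#define FRAMES 3                 /* assumed number of physical frames */

int main() {
    int refs[] = {1, 2, 6, 12, 0, 0, 1, 2};   /* illustrative reference string */
    int n = 8;
    int frames[FRAMES], next = 0, faults = 0;
    for (int i = 0; i < FRAMES; i++) frames[i] = -1;   /* all frames start empty */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];        /* replace the oldest page (FIFO order) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}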

Optimal Page algorithm

·         An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An optimal page-replacement algorithm exists, and has been called OPT or MIN.

·         Replace the page that will not be used for the longest period of time. Use the time when a page is to be used.

Optimal page replacement

Least Recently Used (LRU) algorithm

·         Page which has not been used for the longest time in main memory is the one which will be selected for replacement.

·         Easy to implement, keep a list, replace pages by looking back into time.

Least Recently Used
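A matching C sketch for LRU; again the frame count and reference string are illustrative assumptions. Each frame remembers when its page was last referenced, and the frame with the oldest timestamp is chosen as the victim.

#include <stdio.h>

#define FRAMES 3                 /* assumed number of physical frames */

int main() {
    int refs[] = {1, 2, 6, 1, 12, 2, 6, 1};   /* illustrative reference string */
    int n = 8;
    int frames[FRAMES], last_used[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++) { frames[i] = -1; last_used[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int j = 0; j < FRAMES; j++) {
            if (frames[j] == refs[t]) hit = j;
            if (last_used[j] < last_used[victim]) victim = j;   /* least recently used frame */
        }
        if (hit >= 0) {
            last_used[hit] = t;                 /* refresh the page's last-use time */
        } else {
            frames[victim] = refs[t];           /* evict the LRU page */
            last_used[victim] = t;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}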

Page Buffering algorithm

·         To get a process start quickly, keep a pool of free frames.

·         On page fault, select a page to be replaced.

·         Write the new page in the frame of free pool, mark the page table and restart the process.

·         Then write the dirty page out to disk and place the frame holding the replaced page in the free pool.

Least frequently Used(LFU) algorithm

·         The page with the smallest count is the one which will be selected for replacement.

·         This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process, but then is never used again.

Most frequently Used(MFU) algorithm

·         This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

Operating System - I/O Hardware

One of the important jobs of an Operating System is to manage various I/O devices including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped screen, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers etc.

An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories −

·         Block devices − A block device is one with which the driver communicates by sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.

·         Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.

Device Controllers

Device drivers are software modules that can be plugged into an OS to handle a particular device. Operating System takes help from device drivers to handle all I/O devices.

The Device Controller works like an interface between a device and a device driver. I/O units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component where electronic component is called the device controller.

There is always a device controller and a device driver for each device to communicate with the Operating System. A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.

Any device connected to the computer is connected by a plug and socket, and the socket is connected to a device controller. Following is a model for connecting the CPU, memory, controllers, and I/O devices where CPU and device controllers all use a common bus for communication.

Device Controllers

Synchronous vs asynchronous I/O

·         Synchronous I/O − In this scheme CPU execution waits while I/O proceeds

·         Asynchronous I/O − I/O proceeds concurrently with CPU execution

Communication to I/O Devices

The CPU must have a way to pass information to and from an I/O device. There are three approaches available to communicate with the CPU and Device.

·         Special Instruction I/O

·         Memory-mapped I/O

·         Direct memory access (DMA)

Special Instruction I/O

This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically allow data to be sent to an I/O device or read from an I/O device.

Memory-mapped I/O

When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that I/O device can transfer block of data to/from memory without going through CPU.

Memory-mapped I/O

While using memory mapped IO, OS allocates buffer in memory and informs I/O device to use that buffer to send data to the CPU. I/O device operates asynchronously with CPU, interrupts CPU when finished.

The advantage to this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory mapped IO is used for most high-speed I/O devices like disks, communication interfaces.

Direct Memory Access (DMA)

Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.

Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is only involved at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.

Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where to read/write the data), counters to track the number of transferred bytes, and settings, which includes I/O and memory types, interrupts and states for the CPU cycles.

DMA

The operating system uses the DMA hardware as follows −

1. The device driver is instructed to transfer disk data to a buffer at address X.

2. The device driver then instructs the disk controller to transfer the data to the buffer.

3. The disk controller starts the DMA transfer.

4. The disk controller sends each byte to the DMA controller.

5. The DMA controller transfers the bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.

6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.

Polling vs Interrupts I/O

A computer must have a way of detecting the arrival of any type of input. There are two ways that this can happen, known as polling and interrupts. Both of these techniques allow the processor to deal with events that can happen at any time and that are not related to the process it is currently running.

Polling I/O

Polling is the simplest way for an I/O device to communicate with the processor. The process of periodically checking status of the device to see if it is time for the next I/O operation, is called polling. The I/O device simply puts the information in a Status register, and the processor must come and get the information.

Most of the time, devices will not require attention, and when one does, it will have to wait until it is next interrogated by the polling program. This is an inefficient method, and much of the processor's time is wasted on unnecessary polls.

Compare this method to a teacher continually asking every student in a class, one after another, if they need help. Obviously the more efficient method would be for a student to inform the teacher whenever they require assistance.
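A busy-wait polling loop can be sketched as below. In a real system the status and data would be memory-mapped device registers; here they are ordinary variables standing in for a simulated, already-ready device so the sketch stays runnable, and all names and the READY bit layout are assumptions.

#include <stdint.h>
#include <stdio.h>

#define READY_BIT 0x1u

/* Busy-wait (poll) until the status register reports data ready, then read it. */
uint32_t poll_read(volatile uint32_t *status, volatile uint32_t *data) {
    while ((*status & READY_BIT) == 0)
        ;                         /* the CPU spins here doing no useful work */
    return *data;
}

int main() {
    /* Simulated device: already "ready" with a value waiting to be read. */
    volatile uint32_t status = READY_BIT, data = 42;
    printf("read %u from the device\n", poll_read(&status, &data));
    return 0;
}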

Interrupts I/O

An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a signal to the microprocessor from a device that requires attention.

A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector (the addresses of OS routines that handle various events). When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.

Operating System - I/O Softwares

I/O software is often organized in the following layers −

·         User Level Libraries − This provides simple interface to the user program to perform input and output. For example, stdio is a library provided by C and C++ programming languages.

·         Kernel Level Modules − This provides device driver to interact with the device controller and device independent I/O modules used by the device drivers.

·         Hardware − This layer includes actual hardware and hardware controller which interact with the device drivers and makes hardware alive.

A key concept in the design of I/O software is that it should be device independent where it should be possible to write programs that can access any I/O device without having to specify the device in advance. For example, a program that reads a file as input should be able to read a file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program for each different device.

I/O Softwares

Device Drivers

Device drivers are software modules that can be plugged into an OS to handle a particular device. The Operating System takes help from device drivers to handle all I/O devices. Device drivers encapsulate device-dependent code and implement a standard interface, in such a way that the code contains device-specific register reads/writes. A device driver is generally written by the device's manufacturer and delivered along with the device on a CD-ROM.

A device driver performs the following jobs −

·         To accept request from the device independent software above to it.

·         Interact with the device controller to take and give I/O and perform required error handling

·         Making sure that the request is executed successfully

How a device driver handles a request is as follows: Suppose a request comes to read a block N. If the driver is idle at the time a request arrives, it starts carrying out the request immediately. Otherwise, if the driver is already busy with some other request, it places the new request in the queue of pending requests.

Interrupt handlers

An interrupt handler, also known as an interrupt service routine or ISR, is a piece of software or more specifically a callback function in an operating system or more specifically in a device driver, whose execution is triggered by the reception of an interrupt.

When the interrupt happens, the interrupt procedure does whatever it has to do in order to handle the interrupt, updates data structures and wakes up the process that was waiting for the interrupt to happen.

The interrupt mechanism accepts an address ─ a number that selects a specific interrupt-handling routine/function from a small set. In most architectures, this address is an offset into a table called the interrupt vector table. The table contains the memory addresses of specialized interrupt handlers.
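The vector idea can be illustrated in C as an array of function pointers indexed by the interrupt number. Real vector tables are set up by the hardware and the OS; the handlers and the dispatch function below are simulated.

/* Simulated interrupt vector table: each slot holds a handler address. */
#include <stdio.h>

#define NUM_VECTORS 4

typedef void (*isr_t)(void);

static void timer_isr(void)    { puts("timer tick"); }
static void keyboard_isr(void) { puts("key pressed"); }
static void default_isr(void)  { puts("unexpected interrupt"); }

static isr_t vector_table[NUM_VECTORS] = {
    timer_isr, keyboard_isr, default_isr, default_isr
};

/* Dispatch: the interrupt number is an offset into the table. */
void dispatch_interrupt(int irq)
{
    if (irq >= 0 && irq < NUM_VECTORS)
        vector_table[irq]();   /* call the specific handler */
}

int main(void)
{
    dispatch_interrupt(0);     /* prints "timer tick" */
    dispatch_interrupt(1);     /* prints "key pressed" */
    return 0;
}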

Device-Independent I/O Software

The basic function of the device-independent software is to perform the I/O functions that are common to all devices and to provide a uniform interface to the user-level software. Though it is difficult to write completely device-independent software, we can write some modules that are common to all devices. Following is a list of functions of device-independent I/O software −

·         Uniform interfacing for device drivers

·         Device naming − Mnemonic names are mapped to major and minor device numbers (see the sketch after this list)

·         Device protection

·         Providing a device-independent block size

·         Buffering, because data coming off a device cannot always be stored directly in its final destination.

·         Storage allocation on block devices

·         Allocating and releasing dedicated devices

·         Error Reporting
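As a small illustration of device naming, the following sketch maps a mnemonic name to its major and minor numbers using stat(). On Linux/glibc the major() and minor() macros come from <sys/sysmacros.h>; other systems may declare them elsewhere.

/* Mapping a mnemonic device name to its major/minor numbers. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
    struct stat st;

    if (stat("/dev/tty", &st) == 0) {
        /* st_rdev holds the device number for a special file. */
        printf("/dev/tty -> major %u, minor %u\n",
               major(st.st_rdev), minor(st.st_rdev));
    }
    return 0;
}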

User-Space I/O Software

These are the libraries that provide a richer and simplified interface to access the functionality of the kernel or, ultimately, to interact with the device drivers. Most of the user-level I/O software consists of library procedures, with some exceptions such as the spooling system, which is a way of dealing with dedicated I/O devices in a multiprogramming system.

I/O libraries (e.g., stdio) reside in user space to provide an interface to the OS-resident, device-independent I/O software. For example, putchar(), getchar(), printf() and scanf() are user-level I/O library functions from stdio, available in C programming.
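For example, the following short program uses only those stdio routines; each call is a user-space library wrapper that eventually hands the work to the kernel's device-independent I/O software.

/* User-level I/O through the stdio library. */
#include <stdio.h>

int main(void)
{
    int age;

    printf("Enter your age: ");       /* formatted output via stdout */
    if (scanf("%d", &age) == 1)
        printf("You are %d years old\n", age);

    putchar('A');                     /* single-character output */
    putchar('\n');
    return 0;
}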

Kernel I/O Subsystem

Kernel I/O Subsystem is responsible to provide many services related to I/O. Following are some of the services provided.

·         Scheduling − Kernel schedules a set of I/O requests to determine a good order in which to execute them. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The Kernel I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by the applications.

·         Buffering − The Kernel I/O Subsystem maintains a memory area known as a buffer that stores data while it is transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the producer and consumer of a data stream, or to adapt between devices that have different data-transfer sizes (a sketch follows this list).

·         Caching − The kernel maintains cache memory, which is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.

·         Spooling and Device Reservation − A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process. In other operating systems, it is handled by an in-kernel thread.

·         Error Handling − An operating system that uses protected memory can guard against many kinds of hardware and application errors.
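The buffering idea can be sketched in user space: bytes are produced one at a time, but the "device" is written only in fixed-size blocks, so a small buffer absorbs the mismatch. The block size and the flush target (stdout) are illustrative assumptions.

/* User-space sketch of buffering to bridge a transfer-size mismatch. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 8                 /* illustrative device block size */

static char   block[BLOCK_SIZE];
static size_t used = 0;

static void flush_block(void)
{
    /* Stand-in for handing a full block to the device/kernel. */
    fwrite(block, 1, used, stdout);
    used = 0;
}

void buffered_putc(char c)
{
    block[used++] = c;
    if (used == BLOCK_SIZE)          /* only write when a full block is ready */
        flush_block();
}

int main(void)
{
    const char *msg = "buffering demo\n";
    for (size_t i = 0; i < strlen(msg); i++)
        buffered_putc(msg[i]);
    flush_block();                   /* drain whatever is left over */
    return 0;
}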

Operating System - File System

File

A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the file's creator and user.

File Structure

A file structure should follow a required format that the operating system can understand.

·         A file has a certain defined structure according to its type.

·         A text file is a sequence of characters organized into lines.

·         A source file is a sequence of procedures and functions.

·         An object file is a sequence of bytes organized into blocks that are understandable by the machine.

·         When an operating system defines different file structures, it also contains the code to support these file structures. UNIX and MS-DOS support a minimal number of file structures.

File Type

File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −

Ordinary files

·         These are the files that contain user information.

·         These may contain text, databases or executable programs.

·         The user can apply various operations to such files, such as adding, modifying or deleting content, or even removing the entire file.

Directory files

·         These files contain a list of file names and other information related to those files.

Special files

·         These files are also known as device files.

·         These files represent physical devices such as disks, terminals, printers, networks, tape drives etc.

These files are of two types −

·         Character special files − data is handled character by character, as in the case of terminals or printers.

·         Block special files − data is handled in blocks as in the case of disks and tapes.

File Access Mechanisms

File access mechanism refers to the manner in which the records of a file may be accessed. There are several ways to access files −

·         Sequential access

·         Direct/Random access

·         Indexed sequential access

Sequential access

Sequential access is that in which the records are accessed in sequence, i.e., the information in the file is processed in order, one record after the other. This access method is the most primitive one. Example: compilers usually access files in this fashion.

Direct/Random access

·         Random access file organization provides direct access to the records.

·         Each record has its own address in the file, with the help of which it can be accessed directly for reading or writing (see the sketch after this list).

·         The records need not be in any sequence within the file and they need not be in adjacent locations on the storage medium.
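The following sketch shows direct access with fixed-size records: the address of record i is simply i times the record size, so fseek() can jump straight to it without reading the preceding records. The file name and record layout are assumptions made for the example.

/* Direct/random access to fixed-size records with fseek(). */
#include <stdio.h>

struct record { int id; char name[28]; };

int main(void)
{
    struct record r;
    FILE *fp = fopen("records.dat", "rb");    /* assumed to hold packed records */
    if (!fp) { perror("fopen"); return 1; }

    long i = 5;                               /* read the 6th record directly */
    fseek(fp, i * (long)sizeof r, SEEK_SET);  /* jump straight to its address */
    if (fread(&r, sizeof r, 1, fp) == 1)
        printf("record %ld: id=%d name=%s\n", i, r.id, r.name);

    fclose(fp);
    return 0;
}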

Indexed sequential access

·         This mechanism is built on top of sequential access.

·         An index is created for each file; it contains pointers to the various blocks.

·         The index is searched sequentially, and its pointer is used to access the file directly.

Space Allocation

Files are allocated disk space by the operating system. Operating systems deploy the following three main ways to allocate disk space to files.

·         Contiguous Allocation

·         Linked Allocation

·         Indexed Allocation

Contiguous Allocation

·         Each file occupies a contiguous address space on disk.

·         Assigned disk addresses are in linear order.

·         Easy to implement.

·         External fragmentation is a major issue with this type of allocation technique.

Linked Allocation

·         Each file carries a list of links to disk blocks.

·         Directory contains link / pointer to first block of a file.

·         No external fragmentation

·         Effectively used for sequential-access files.

·         Inefficient for direct-access files.

Indexed Allocation

·         Provides solutions to problems of contiguous and linked allocation.

·         An index block is created that holds all the pointers to a file's blocks (see the sketch after this list).

·         Each file has its own index block which stores the addresses of disk space occupied by the file.

·         Directory contains the addresses of index blocks of files.
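A minimal sketch of an index block in C is shown below; the block addresses and sizes are made up for illustration.

/* Indexed allocation: an index block holds the disk addresses of a file's blocks. */
#include <stdio.h>

#define POINTERS_PER_INDEX 16

struct index_block {
    int block_addr[POINTERS_PER_INDEX];    /* disk addresses of the file's blocks */
    int count;                             /* how many blocks the file uses */
};

/* Translate a logical block number inside the file to its disk address. */
int logical_to_physical(const struct index_block *ib, int logical)
{
    if (logical < 0 || logical >= ib->count)
        return -1;                         /* beyond the end of the file */
    return ib->block_addr[logical];
}

int main(void)
{
    struct index_block ib = { { 70, 12, 99, 43 }, 4 };   /* made-up addresses */
    printf("logical block 2 lives at disk block %d\n",
           logical_to_physical(&ib, 2));
    return 0;
}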

Operating System - Security

Security refers to providing a protection system for computer system resources such as the CPU, memory, disks, software programs and, most importantly, the data/information stored in the computer system. If a computer program is run by an unauthorized user, he/she may cause severe damage to the computer or the data stored in it. So a computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms etc. We're going to discuss the following topics in this chapter.

·         Authentication

·         One Time passwords

·         Program Threats

·         System Threats

·         Computer Security Classifications

Authentication

Authentication refers to identifying each user of the system and associating the executing programs with those users. It is the responsibility of the operating system to create a protection system which ensures that a user who is running a particular program is authentic. Operating systems generally identify/authenticate users in the following three ways −

·         Username / Password − The user needs to enter a registered username and password with the operating system to log in to the system (a sketch follows this list).

·         User card/key − The user needs to punch a card into a card slot, or enter a key generated by a key generator into an option provided by the operating system, to log in to the system.

·         User attribute - fingerprint/ eye retina pattern/ signature − The user needs to pass his/her attribute via a designated input device used by the operating system to log in to the system.
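As a minimal sketch of the username/password case, the following compares an entered password against a stored hash using the POSIX crypt() function (commonly declared in <crypt.h> and linked with -lcrypt). The stored hash below is only a placeholder, not a real account entry, so the check is expected to reject it.

/* Minimal password-check sketch using crypt(). */
#include <crypt.h>
#include <stdio.h>
#include <string.h>

int check_password(const char *entered, const char *stored_hash)
{
    /* crypt() re-hashes the entered password with the salt embedded
       in the stored hash, so the two strings can be compared. */
    const char *computed = crypt(entered, stored_hash);
    return computed && strcmp(computed, stored_hash) == 0;
}

int main(void)
{
    /* Placeholder hash; a real system would load this from its password database. */
    const char *stored_hash = "xy0123456789.";

    if (check_password("secret", stored_hash))
        puts("login accepted");
    else
        puts("login rejected");   /* expected here, since the hash is a placeholder */
    return 0;
}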

One Time passwords

One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time the user tries to log in to the system. Once a one-time password has been used, it cannot be used again. One-time passwords are implemented in various ways.

·         Random numbers − Users are provided cards with numbers printed alongside corresponding letters. The system asks for the numbers corresponding to a few randomly chosen letters.

·         Secret key − Users are provided a hardware device that can create a secret ID mapped to the user ID. The system asks for this secret ID, which must be generated anew prior to every login.

·         Network password − Some commercial applications send one-time passwords to the user's registered mobile number or email, which must be entered prior to login.

Program Threats

The operating system's processes and kernel perform the designated tasks as instructed. If a user program makes these processes perform malicious tasks, it is known as a program threat. A common example of a program threat is a program installed on a computer that can store user credentials and send them over the network to some hacker. Following is a list of some well-known program threats.

·         Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.

·         Trap Door − If a program that is designed to work as required has a security hole in its code and performs illegal actions without the user's knowledge, it is said to have a trap door.

·         Logic Bomb − A logic bomb is a situation in which a program misbehaves only when certain conditions are met; otherwise, it works as a genuine program. This makes it harder to detect.

·         Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and can modify/delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As the user accesses the program, the virus starts embedding itself in other files/programs and can make the system unusable for the user.

System Threats

System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats across a complete network, which is called a program attack. System threats create an environment in which operating system resources/user files are misused. Following is a list of some well-known system threats.

·         Worm − A worm is a process that can choke down system performance by using system resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require. Worm processes can even shut down an entire network.

·         Port Scanning − Port scanning is a mechanism or means by which a hacker can detect system vulnerabilities in order to attack the system.

·         Denial of Service − Denial of service attacks normally prevent the user from making legitimate use of the system. For example, a user may not be able to use the internet if a denial of service attack targets the browser's content settings.

Computer Security Classifications

As per the U.S. Department of Defense's Trusted Computer System Evaluation Criteria, there are four security classifications in computer systems: A, B, C and D. These are widely used specifications to determine and model the security of systems and of security solutions. Following is a brief description of each classification.

  1. Type A − Highest level. Uses formal design specifications and verification techniques. Grants a high degree of assurance of process security.

  2. Type B − Provides a mandatory protection system. Has all the properties of a class C2 system. Attaches a sensitivity label to each object. It is of three types:

·         B1 − Maintains the security label of each object in the system. The label is used for making access-control decisions.

·         B2 − Extends the sensitivity labels to each system resource, such as storage objects; supports covert channels and the auditing of events.

·         B3 − Allows creating lists or user groups for access control, in order to grant or revoke access to a given named object.

  3. Type C − Provides protection and user accountability using audit capabilities. It is of two types:

·         C1 − Incorporates controls so that users can protect their private information and keep other users from accidentally reading/deleting their data. UNIX versions are mostly C1 class.

·         C2 − Adds individual-level access control to the capabilities of a C1 level system.

  4. Type D − Lowest level. Minimum protection. MS-DOS and Windows 3.1 fall in this category.