Summary Blog Post

 

During these past five weeks, the class has learned an amazing amount about operating systems theory and design. It was no easy task, but with the help of classmates, a better understanding of all five sections came to light. The first step in understanding operating systems is to understand their features and structure, which are:

·         Security

o    A function that prevents unauthorized access and protects users’ data.

·         System Performance Control

o    A function that monitors the overall health of the system and provides insight into how to improve performance.

·         Job Accounting

o    A function that logs users’ tasks, time, and resources.

·         Software/User Coordination

o    A function that coordinates communication between software and users.

·         Memory Management

o    A function that manages primary (main) memory.

·         Processor Management

o    A function that schedules processes’ access to the processor.

·         Device Management

o    A function that manages all devices connected to the system.

·         File Management

o    A function that organizes files into directories.


Now that there is a better understanding of the features and structure of contemporary operating systems, we need to figure out how operating systems enable processes to share and exchange information. This kind of communication is easier to grasp once processes, process states, and process control blocks are understood. The following information will help with that.

·         Process: A program in execution, which may include:

o    Stack – Contains temporary data such as function parameters, return addresses, and local variables.

o    Heap – Contains memory that is dynamically allocated while the process runs.

o    Data – Contains global variables.

o    Text – Contains program code.

·         Process State: The current activity of a process as it executes. The states are:

o    New – The process is being created.

o    Running – The process’s instructions are being executed.

o    Waiting – The process is waiting for some event to occur (such as I/O completion).

o    Ready – The process is waiting to be assigned to a processor.

o    Terminated – The process has finished execution.

·         Process Control Block: A representation of each process in the operating system, which contains the following (a simplified sketch in C appears after this list):

o    Process State – The state the process is currently in (new, running, waiting, ready, etc.).

o    Program Counter – Holds the address of the next instruction to be executed.

o    CPU Registers – Depending on the architecture of the computer, these may include general-purpose registers, stack pointers, condition-code information, index registers, and accumulators.

o    CPU-scheduling Information – Scheduling parameters such as process priorities and pointers to scheduling queues.

o    Memory-management Information – Depending on the memory system, this may include segment tables, page tables, and limit registers.

o    Accounting Information – Information such as account numbers, job or process numbers, and time limits.

o    I/O Status Information – Information such as the list of I/O devices allocated to the process and its open files.
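To make the PCB description above a little more concrete, here is a minimal sketch in C of what a process control block might hold. The pcb_t type and its field names are made up for illustration; a real kernel’s PCB (for example, Linux’s task_struct) contains far more detail.

```c
#include <stdint.h>

/* Possible process states, matching the list above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* A simplified, illustrative process control block.
 * Field names are hypothetical; real kernels store much more. */
typedef struct pcb {
    int           pid;              /* process identifier                  */
    proc_state_t  state;            /* current process state               */
    uint64_t      program_counter;  /* address of the next instruction     */
    uint64_t      registers[16];    /* saved general-purpose CPU registers */
    int           priority;         /* CPU-scheduling information          */
    void         *page_table;       /* memory-management information       */
    uint64_t      cpu_time_used;    /* accounting information              */
    int           open_files[16];   /* I/O status: open file descriptors   */
    struct pcb   *next;             /* link for a scheduling queue         */
} pcb_t;
```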

According to Silberschatz, Galvin, and Gagne, a single-threaded process performs only one task at a time, while a multithreaded process, which most modern operating systems support, performs multiple tasks at a time (2014, p. 163). Looking at how these work, a single thread is fine for certain tasks, but when a task is interrupted or blocked, nothing else gets done until that interruption is resolved, which may take a very long time. With multithreading, multiple threads can be created so that multiple tasks can be carried out in parallel (a short Pthreads sketch follows the list below). When comparing models, we look at the different relationships between user threads and kernel threads. These models are as follows:

·         Many-to-One Model – A model that maps multiple user threads onto a single kernel thread. As efficient as it may be, the user threads cannot run in parallel, because only one thread can access the kernel at a time.

·         One-to-One Model – A model that maps each user thread to its own kernel thread. This model does allow multiple threads to run in parallel, but every user thread created requires a corresponding kernel thread.

·         Many-to-Many Model – A model that multiplexes many user threads onto a smaller or equal number of kernel threads. This model allows threads to run in parallel and avoids the drawbacks of the many-to-one and one-to-one models.
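As a small illustration of multithreading, the sketch below uses POSIX threads (Pthreads). On Linux, Pthreads follows the one-to-one model described above: each pthread_create call produces a user thread backed by its own kernel thread, so the two tasks may run in parallel. The task names “A” and “B” are made up for the example.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread performs its own task independently of the others. */
static void *task(void *arg) {
    const char *name = arg;
    printf("thread %s is doing its work\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* Create two threads; with a one-to-one model each is backed
     * by its own kernel thread and may run in parallel. */
    pthread_create(&t1, NULL, task, "A");
    pthread_create(&t2, NULL, task, "B");

    /* Wait for both tasks to finish. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```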

The critical-section problem asks for a protocol that does not allow a process to execute in its critical section while another process is already in its own critical section. This prevents two processes from updating shared data at the same time, which would otherwise leave that data in an inconsistent state. One class of solutions is software that uses locks. After doing research, Monum described several methods used in software: mutex locks, which provide acquire and release functions that execute atomically; test_and_set, which atomically sets a shared Boolean variable to true; and condition variables, which queue processes while they wait to enter the critical section (n.d.). With these methods in mind, any solution must satisfy the following requirements (a minimal mutex sketch follows this list):

·         Mutual Exclusion – When a process is executing in its critical section, no other process can be.

·         Progress – If no process is executing in its critical section, then only processes that are not in their remainder sections can take part in deciding which process enters its critical section next.

·         Bounded Waiting – There must be a bound on the number of times other processes may enter their critical sections after a process has requested entry to its own and before that request is granted.
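Here is a minimal sketch of the mutex-lock approach mentioned above, using Pthreads. The lock provides mutual exclusion, so only one thread at a time can execute the increment of the shared counter; the counter itself and the loop count are made up for the example.

```c
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                            /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* entry section: acquire the lock  */
        shared_counter++;             /* critical section                 */
        pthread_mutex_unlock(&lock);  /* exit section: release the lock   */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With the mutex the result is always 200000; without it, the two
     * threads could interleave and produce an inconsistent, smaller value. */
    printf("counter = %ld\n", shared_counter);
    return 0;
}
```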

The objective and function of memory management in an operating system is to babysit all memory. This means deciding how, when, and where processes get memory and how much of it. Memory management also keeps track of every location and whether it is free or allocated (a rough sketch of this bookkeeping follows below). The purpose is to ensure the operating system, running processes, and applications have the memory they need to operate. Basic hardware (RAM and caches) helps speed up memory access, and protection mechanisms make sure memory is accessed safely so operations run smoothly (Silberschatz et al., 2014).
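As a rough, hypothetical sketch of the “track which locations are free or allocated” job described above, the example below keeps a tiny bitmap of memory frames and hands out the first free one. The frame count and function names are invented for illustration; real memory managers are far more elaborate.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_FRAMES 8                   /* tiny illustrative memory of 8 frames */

static bool frame_used[NUM_FRAMES];    /* false = free, true = allocated */

/* Return the index of a newly allocated frame, or -1 if memory is full. */
static int allocate_frame(void) {
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (!frame_used[i]) {
            frame_used[i] = true;
            return i;
        }
    }
    return -1;
}

static void free_frame(int i) {
    frame_used[i] = false;             /* mark the frame free again */
}

int main(void) {
    int a = allocate_frame();
    int b = allocate_frame();
    printf("allocated frames %d and %d\n", a, b);
    free_frame(a);
    printf("frame %d freed; next allocation gets %d\n", a, allocate_frame());
    return 0;
}
```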

A virtual (or logical) address is generated by the Central Processing Unit (CPU), and the virtual address space is the set of all virtual addresses the CPU generates for a program. A physical address is what the memory unit sees, and the physical address space is the set of all physical addresses corresponding to the virtual addresses in that virtual address space. With the help of the memory-management unit (MMU), each virtual address is mapped to its corresponding physical address.
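To illustrate that mapping, the simplified sketch below splits a virtual address into a page number and an offset and looks the page up in a small, made-up page table, which is the same job the MMU performs in hardware. The page size and table contents are assumptions for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096                 /* 4 KiB pages */

/* A tiny, hypothetical page table: virtual page -> physical frame. */
static const uint64_t page_table[4] = { 7, 2, 5, 0 };

/* Translate a virtual address the way an MMU would. */
static uint64_t translate(uint64_t vaddr) {
    uint64_t page   = vaddr / PAGE_SIZE;   /* virtual page number    */
    uint64_t offset = vaddr % PAGE_SIZE;   /* offset within the page */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    uint64_t vaddr = 2 * PAGE_SIZE + 42;   /* page 2, offset 42 */
    printf("virtual 0x%llx -> physical 0x%llx\n",
           (unsigned long long)vaddr,
           (unsigned long long)translate(vaddr));
    return 0;
}
```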

 


File system management is software that manages data file operations, such as storing and organizing data within a computer system. It handles this with functions for creating, deleting, and updating files, directories, and archives. Another function to consider is mapping, which maps files and directories to their physical locations. Directories come in different structures, such as single-level, two-level, and tree-structured directories, and they support operations like searching, creating, deleting, renaming, and listing files. The last function to mention is file backup, a fail-safe that lowers the risk of data loss (Silberschatz et al., 2014, pp. 492-496). A few examples of devices that host this type of file system are hard drives, optical drives, and flash drives. A small sketch of the basic file operations follows.
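The sketch below walks through the create/update/rename/delete operations mentioned above using standard C and POSIX file-system calls; the file names are made up for illustration.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char *path = "example.txt";          /* hypothetical file name */

    /* Create (or truncate) a file and write some data to it. */
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "hello, file system\n");
    fclose(f);

    /* Rename and then delete the file -- two more directory operations. */
    rename(path, "renamed.txt");
    unlink("renamed.txt");
    return 0;
}
```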

Different types of input/output devices include keyboards, mice, microphones, monitors, printers, computer speakers, and so on. The difference between the two layers is that the hardware layer consists of physical components, while the software layer essentially tells those components what to do. Hardware cannot really do any useful work without software, and without hardware, software applications can’t be executed. When it comes to corruption and damage, hardware is unaffected unless physically damaged, in which case it must be replaced; software, on the other hand, can be corrupted, but backup copies can be reinstalled. I/O and memory components are integrated through device controllers connected to a common bus, and because the I/O devices share that bus, they are given access to shared memory.

 


As modern computing evolves, so do the protection and security mechanisms that control access to programs and resources. The forms of protection in modern computer systems are as follows:

·         Goals of Protection (13.1)

o    The goal is to protect the integrity of computer systems as they become more sophisticated over time. One of the main reasons for protection is to guard against threats; others are to enforce policies and detect errors (pp. 601-602).

·         Principles of Protection (13.2)

o    The principle of least privilege means placing limits on the privileges of programs, users, and systems: enough privilege to perform their tasks, but not enough to compromise system integrity (pp. 602-603).

·         Domain of Protection (13.3)

o    A domain guards resources and the means of accessing them. Resources are accessed through objects, whether hardware objects or software objects, and only processes with the proper authorization are allowed to access them. Like the principle of least privilege, the need-to-know principle is another tool for limiting the damage a faulty process can do to a system (pp. 603-604).

·         Language-Based Protection (13.9)

o    Protection that strengthens the security of applications by inspecting and validating attempted accesses to protected resources. As with the goals of protection, as computers become more complex, so do high-level applications, and protection will need to become more refined (p. 620).

The access matrix is used to control which resources a process can access by implementing policy decisions about which protections apply and which domain a process executes in. Each entry within the access matrix is typically decided by a user and can be modified individually; these modifications grant certain rights over a given object (column) within a given domain (row) (Silberschatz et al., 2014, p. 609). A small sketch of an access matrix follows.
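The example below is a small, made-up access matrix sketched in C: each row is a domain, each column an object (two files and a printer), and each entry lists the rights that domain holds over that object. The domain and object names are hypothetical.

```c
#include <stdio.h>

/* A tiny, hypothetical access matrix: rows are domains, columns are
 * objects, and each entry is the set of rights (empty string = none). */
static const char *objects[] = { "file1", "file2", "printer" };
static const char *domains[] = { "D1", "D2", "D3" };

static const char *matrix[3][3] = {
    /*            file1         file2         printer  */
    /* D1 */  {   "read",       "",           ""       },
    /* D2 */  {   "",           "read/write", "print"  },
    /* D3 */  {   "read/write", "",           "print"  },
};

int main(void) {
    /* Print the matrix so the rights held by each domain are easy to see. */
    for (int d = 0; d < 3; d++) {
        for (int o = 0; o < 3; o++) {
            printf("%s on %s: %s\n", domains[d], objects[o],
                   matrix[d][o][0] ? matrix[d][o] : "no access");
        }
    }
    return 0;
}
```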

Security is used to protect programs, systems, and networks from threats by implementing security defenses. These defenses include security policies, vulnerability assessments, intrusion detection, virus protection, auditing, accounting, and logging (Silberschatz et al., 2014, pp. 665-672).



In conclusion, everything learned during the past five weeks will help in future courses and even future jobs, because whether someone pursues computer software technology or cybersecurity, they need to understand operating system features and structure. For instance, someone in cybersecurity would benefit from understanding the protection and security features of an operating system in order to find vulnerabilities. Someone in computer software technology would benefit from understanding software/user coordination, processor management, and memory management, since they solve real-world problems by designing software for computers and applications. It is important for everyone to understand this information, because otherwise problems go unnoticed, and that only leads to more vulnerabilities.

References

Monum, A. (n.d.). What is the critical section problem in operating systems? Educative. https://www.educative.io/answers/what-is-the-critical-section-problem-in-operating-systems

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). https://redshelf.com/

 
