Wednesday, January 16, 2008

Exercises (1-6)
1. A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
Starvation is similar in effect to deadlock. Two or more programs become deadlocked together when each of them waits for a resource occupied by another program in the same set. In contrast, one or more programs are starving when each of them is waiting for resources that are occupied by programs that may or may not be in the same starving set. Moreover, in a deadlock, no program in the set ever changes its state.
A race is a synchronization problem between two processes vying for the same resource, where the outcome depends on the relative timing of the two.
2. A real-life example of deadlock is gridlocked traffic. A real-life example of starvation is a staircase that can accommodate only one person at a time, when two people meet coming from opposite ends. A real-life example of a race is two people arriving at the same door at the same time.
3. Four necessary conditions for deadlock: The presence of deadlock in a system is characterized by these four necessary conditions. The term necessary means that if there is deadlock, then all four must be present.
a. Mutually exclusive resource access - A resource acquired is held exclusively, i.e., it is not shared by other processes.
b. No preemption - A process' resources cannot be taken away from it. Only the process itself can give up its resources.
c. Hold and wait - A process holds some resources and is blocked requesting more.
d. Circularity - There is a circular chain of two or more processes in which the resources needed by one process are held by the next process.
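Condition (d) is the one a deadlock detector actually looks for. A minimal sketch of finding a circular wait in a wait-for graph (the class WaitForGraph and its methods are my own illustration, not from the exercises):

```java
import java.util.*;

// Hypothetical helper: detects condition (d), a circular chain of
// waiting processes, by depth-first search over the wait-for graph.
public class WaitForGraph {
    private final Map<String, List<String>> waitsFor = new HashMap<>();

    public void addEdge(String from, String to) {       // "from" waits for "to"
        waitsFor.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public boolean hasCycle() {
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String p : waitsFor.keySet())
            if (dfs(p, done, onPath)) return true;
        return false;
    }

    private boolean dfs(String p, Set<String> done, Set<String> onPath) {
        if (onPath.contains(p)) return true;            // back edge => circular wait
        if (!done.add(p)) return false;                 // already fully explored
        onPath.add(p);
        for (String q : waitsFor.getOrDefault(p, List.of()))
            if (dfs(q, done, onPath)) return true;
        onPath.remove(p);
        return false;
    }

    public static void main(String[] args) {
        WaitForGraph g = new WaitForGraph();
        g.addEdge("P1", "P2");   // P1 waits for a resource P2 holds
        g.addEdge("P2", "P1");   // ...and vice versa: deadlock
        System.out.println(g.hasCycle());   // prints true
    }
}
```

A cycle in this graph means all four conditions can hold simultaneously; breaking any one edge (e.g., by preemption) breaks the deadlock.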
4. Algorithm for prevention of deadlock and starvation:

public boolean tryAcquire( int n0, int n1, ... ) {
    if ( for all i: ni ≤ availi ) {   // successful acquisition
        availi -= ni for all i;
        return true;                  // indicate success
    } else
        return false;                 // indicate failure
}

init) Semaphore s = new Semaphore(1,1);

Thread A            Thread B
--------            --------
s.acquire(1,0);     s.acquire(0,1);
s.acquire(0,1);     s.acquire(1,0);

Thread B
--------
while (true) {
    s.acquire(0,1);
    if ( s.tryAcquire(1,0) )    // if second acquisition succeeds
        break;                  // leave the loop
    else {
        s.release(0,1);         // release what is held
        sleep( SOME_AMOUNT );   // pause a bit before trying again
    }
}

run   action                   s.value
---   ------                   -------
                               (1,1)
A     s.acquire(1,0)           (0,1)
B     s.acquire(0,1)           (0,0)
A     s.acquire(0,1)           A blocks on second
B     s.tryAcquire(1,0) => false
B     s.release(0,1)           (0,1)
A     s.acquire(0,1)           (0,0)  A succeeds on second
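The pseudocode above assumes a semaphore that holds several counts at once, which java.util.concurrent does not provide directly. A minimal runnable sketch under that assumption (the class PairSemaphore and its methods are my own invention, not a standard API):

```java
// Hypothetical two-counter semaphore matching the s.acquire(n0, n1)
// style used in the exercise; not a standard java.util.concurrent class.
public class PairSemaphore {
    private int v0, v1;

    public PairSemaphore(int v0, int v1) { this.v0 = v0; this.v1 = v1; }

    // Blocking acquire: wait until both counts are available, take both.
    public synchronized void acquire(int n0, int n1) throws InterruptedException {
        while (v0 < n0 || v1 < n1) wait();
        v0 -= n0; v1 -= n1;
    }

    // Non-blocking acquire: take both counts, or neither.
    public synchronized boolean tryAcquire(int n0, int n1) {
        if (v0 < n0 || v1 < n1) return false;   // indicate failure
        v0 -= n0; v1 -= n1;
        return true;                            // indicate success
    }

    public synchronized void release(int n0, int n1) {
        v0 += n0; v1 += n1;
        notifyAll();                            // wake any blocked acquirers
    }

    public static void main(String[] args) {
        PairSemaphore s = new PairSemaphore(1, 1);
        // Thread B's deadlock-avoiding pattern from the exercise:
        // never block while holding one resource and needing the other.
        boolean got = s.tryAcquire(1, 0) && s.tryAcquire(0, 1);
        System.out.println(got);   // prints true: both resources acquired
        s.release(1, 1);
    }
}
```

Because tryAcquire never blocks, Thread B can back off and release what it holds, which removes the hold-and-wait condition for B and lets Thread A finish.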
5.
a. Deadlock can occur when the bridge is destroyed.
b. When there are no traffic lights.
c. To prevent deadlock, make all the roads one-way.
6.
a. This is not a deadlock.
b. There are no blocked processes.
c. P2 can freely request R1 and R2.
d. P1 can freely request R1 and R2.
e. Both P1 and P2 have requested R2.
1. P1 will wait after the request of P2.
2. P2 will wait after the request of P1.

Wednesday, November 28, 2007

Operating System

Learn about using a UNIX system as a primary domain controller (PDC) and file repository, including an anonymous, read-only shared area accessible by anyone with a Web browser. To be a good citizen on your local network, you need to integrate your favorite UNIX system with the networking features of client systems, generally running Windows® XP or Mac OS X. This makes it easier for the users of those workstations to take advantage of the centralized authentication and storage facilities you can provide.

I'm off to IDUG in Berlin next week, and one of my presentations is titled "DB2 UDB for Linux, UNIX, Windows for a z/OS DBA". For those of you who can't make it to Berlin, I thought I would create a series of blog postings that will give a z/OS DBA the Rosetta stone for DB2 on Linux, UNIX, Windows. Some competitors out there like to say "DB2 is not DB2 is not DB2", implying that DB2 on z/OS is not the same as DB2 on Linux, UNIX, Windows. I'll be quite blunt and say that DB2 on z/OS is a different code base from DB2 on Linux, UNIX, Windows. Why? Well, back in the late 80s and early 90s, when DB2 on the distributed platform was being developed, it made no sense to try to "port" DB2 for z/OS. That code base is written primarily in assembler language, which, as you can imagine, is great on z/OS and delivers outstanding performance due to the tight integration with the OS.
But assembler does not port, and when you build a database that is going to run on distributed systems (AIX, HP-UX, Solaris, Linux, Windows) you need a code base that is portable (like C or C++). Why, as a customer, would you care about what language your application is written in? You shouldn't care, as long as you can take your own application and run it on the platform you prefer. So the design goal of the DB2 team is to be able to develop an application on any member of the DB2 family and deploy it on any other member. The IBM organization has an SQL language council to ensure that any SQL going into any DB2 family member is identical, so that applications can be easily ported across platforms. Now, to be fair, there are differences, but they are primarily based on the operating system and storage infrastructure that the DB2 database is running on. So as an administrator you will see some differences, which I will go through in these posts. But to be fair, if you are running DB2 on z/OS, your administration skills are more easily ported to DB2 on Linux, UNIX, Windows than to any other DBMS. Why? Because we have a joint architecture board that draws on the skills of all of the DB2 architects across all platforms to leverage the best of all worlds when designing new features. The timeline for when the features are released on each platform may vary based on customer requirements, but the features as delivered will be similar or identical, depending on the requirements at the OS layer.

Virtual memory in UNIX
Virtual memory is an internal "trick" that relies on the fact that not every executing task is always referencing its RAM memory region. Since all RAM regions are not constantly in use, UNIX has developed a paging algorithm that moves RAM memory pages to the swap disk when it appears that they will not be needed in the immediate future.
RAM demand paging in UNIX
As memory regions are created, UNIX will not refuse a new task whose RAM requests exceed the amount of RAM. Rather, UNIX will page out the least recently referenced RAM memory page to the swap disk to make room for the incoming request. When the physical limit of the RAM is exceeded, UNIX can wipe out RAM regions because they have already been written to the swap disk. Once a RAM region has been moved to swap, any subsequent reference by the originating program requires UNIX to page the region back in to make the memory accessible. UNIX page-in operations involve disk I/O and are a source of slow performance. Hence, avoiding UNIX page-in operations is an important concern for the Oracle DBA.
The main functions of paging are performed when a program tries to access pages that do not currently reside in RAM, a situation causing a page fault. The operating system:
1. Handles the page fault, in a manner invisible to the causing program, and takes control.
2. Determines the location of the data in auxiliary storage.
3. Determines the page frame in RAM to use as a container for the data.
4. If a page currently residing in the chosen frame has been modified since loading (if it is dirty), writes the page to auxiliary storage.
5. Loads the requested data into the available page frame.
6. Returns control to the program, transparently retrying the instruction that caused the page fault.
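The six steps above can be sketched in miniature. This is an illustrative in-memory simulation (all class and field names are my own, not a real kernel interface), with each step marked in a comment:

```java
import java.util.*;

// Illustrative page-fault handler following the six steps above;
// "disk" is auxiliary storage, "ram" holds at most FRAMES pages.
public class PageFaultDemo {
    static final int FRAMES = 2;
    static Map<Integer, String> disk = new HashMap<>();          // auxiliary storage
    static LinkedHashMap<Integer, String> ram = new LinkedHashMap<>();
    static Set<Integer> dirty = new HashSet<>();
    static int faults = 0;

    static String access(int page) {
        if (!ram.containsKey(page)) {                 // 1. take control on a fault
            String data = disk.get(page);             // 2. locate data on disk
            if (ram.size() >= FRAMES) {               // 3. choose a frame (FIFO here)
                int victim = ram.keySet().iterator().next();
                if (dirty.remove(victim))             // 4. write back if dirty
                    disk.put(victim, ram.get(victim));
                ram.remove(victim);
            }
            ram.put(page, data);                      // 5. load the requested page
            faults++;
        }
        return ram.get(page);                         // 6. retry the access
    }

    public static void main(String[] args) {
        disk.put(1, "a"); disk.put(2, "b"); disk.put(3, "c");
        access(1); access(2); access(3); access(1);
        System.out.println(faults);   // prints 4: every access missed under FIFO
    }
}
```

A real handler works with hardware page tables and disk I/O rather than maps, but the control flow is the same.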
The need to reference memory at a particular address arises from two main sources:
The processor trying to load and execute a program's instructions.
Data being accessed by a program's instruction.
In step 3, when a page has to be loaded and all existing pages in RAM are currently in use, one of the existing pages must be swapped with the requested new page. The paging system must determine which page to swap out by choosing one that is least likely to be needed within a short time. Various page replacement algorithms try to address this issue.
Most operating systems use the least recently used (LRU) page replacement algorithm. The theory behind LRU is that the least recently used page is the one most likely not to be needed shortly; when a new page is needed, the least recently used page is discarded. This algorithm is usually correct but not always: for example, a sequential process moves forward through memory and never again accesses the most recently used page.
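LRU replacement can be sketched in a few lines (the class LruPager and its countFaults method are my own illustration, not an OS interface). Java's LinkedHashMap in access order keeps the least recently used entry first, so eviction falls out of removeEldestEntry:

```java
import java.util.*;

// Illustrative LRU paging simulator: counts page faults for a
// reference string given a fixed number of RAM frames.
public class LruPager {
    public static int countFaults(int[] refs, int frames) {
        // access-order LinkedHashMap: iteration starts at the LRU entry
        LinkedHashMap<Integer, Boolean> ram =
            new LinkedHashMap<>(16, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> e) {
                    return size() > frames;   // evict the least recently used page
                }
            };
        int faults = 0;
        for (int page : refs) {
            if (!ram.containsKey(page)) faults++;   // page fault: must load it
            ram.put(page, Boolean.TRUE);            // referencing makes it MRU
        }
        return faults;
    }

    public static void main(String[] args) {
        // 3 frames; the reference string touches 4 distinct pages
        int[] refs = {1, 2, 3, 1, 2, 4, 1};
        System.out.println(countFaults(refs, 3));   // prints 4
    }
}
```

In the example, pages 1 and 2 are re-referenced before page 4 arrives, so page 3 is the LRU victim and page 1 stays resident, which is exactly the behavior the theory predicts.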
Most programs that become active reach a steady state in their demand for memory locality both in terms of instructions fetched and data being accessed. This steady state is usually much less than the total memory required by the program. This steady state is sometimes referred to as the working set: the set of memory pages that are most frequently accessed.
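The working set can be made concrete as the set of distinct pages referenced within the last few references. A small illustration (my own sketch, not an OS interface; the window size is an assumed parameter):

```java
import java.util.*;

// Illustrative working-set computation: the distinct pages referenced
// within the last 'window' references up to time t.
public class WorkingSet {
    public static int size(int[] refs, int t, int window) {
        Set<Integer> ws = new HashSet<>();
        for (int i = Math.max(0, t - window + 1); i <= t; i++)
            ws.add(refs[i]);           // distinct pages in the window
        return ws.size();
    }

    public static void main(String[] args) {
        int[] refs = {1, 2, 1, 1, 3, 2, 1};
        // Looking back 4 references from time 6: pages {1, 3, 2}
        System.out.println(size(refs, 6, 4));   // prints 3
    }
}
```

If the number of RAM frames stays below this size for typical windows, every eviction is soon followed by a fault on the evicted page, which is the thrashing described next.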
Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages that can be stored in RAM is low enough to minimize the number of page faults. A program that works with huge data structures will sometimes require a working set that is too large to be efficiently managed by the page system resulting in constant page faults that drastically slow down the system. This condition is referred to as thrashing: a page is swapped out and then accessed causing frequent faults.
An interesting characteristic of thrashing is that as the working set grows, there is very little increase in the number of faults until a critical point, when faults go up dramatically and the majority of the system's processing power is spent on handling them.
Windows, in addition to the RAM, uses a part or parts of the hard disk for storing temporary files and information. These are data that are not required immediately, for example, when you minimize a window or have an application running in the background. Although Windows' management of virtual memory has grown more efficient, it still tends to access the hard disk very often, most of the time unnecessarily, because it is programmed to keep the RAM free. With a little bit of tweaking, you can optimize this access, not only making sure that Windows uses this feature sparingly and sensibly but also speeding up file access generally.
Virtual Memory in Windows NT
The virtual-memory manager (VMM) in Windows NT is nothing like the memory managers used in previous versions of the Windows operating system. Relying on a 32-bit address model, Windows NT is able to drop the segmented architecture of previous versions of Windows. Instead, the VMM employs 32-bit virtual addresses for directly manipulating the entire 4-GB process address space. At first this appears to be a restriction because, without segment selectors for relative addressing, there is no way to move a chunk of memory without having to change the address that references it. In reality, the VMM is able to do exactly that by implementing virtual addresses. Each application is able to reference a physical chunk of memory, at a specific virtual address, throughout the life of the application. The VMM takes care of whether the memory should be moved to a new location or swapped to disk completely independently of the application, much like updating a selector entry in the local descriptor table (LDT).
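On 32-bit x86, the translation the VMM performs starts by splitting a virtual address into three fields: a 10-bit page directory index, a 10-bit page table index, and a 12-bit byte offset within the 4-KB page. A sketch of that split (my own illustration, not Windows NT source):

```java
// Illustrative decomposition of a 32-bit x86 virtual address into the
// fields a two-level page-table walk uses; not actual Windows NT code.
public class VirtualAddress {
    public static int directoryIndex(int va) { return (va >>> 22) & 0x3FF; } // top 10 bits
    public static int tableIndex(int va)     { return (va >>> 12) & 0x3FF; } // middle 10 bits
    public static int byteOffset(int va)     { return va & 0xFFF; }          // low 12 bits

    public static void main(String[] args) {
        int va = 0x00401234;   // a typical user-mode address
        System.out.printf("dir=%d table=%d offset=0x%X%n",
            directoryIndex(va), tableIndex(va), byteOffset(va));
        // prints dir=1 table=1 offset=0x234
    }
}
```

Because the directory and table entries, not the virtual address itself, point at physical frames, the VMM can relocate or swap the underlying page while the application keeps using the same virtual address.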
Windows versions 3.1 and earlier employed a scheme for moving segments of memory to other locations in memory both to maximize the amount of available contiguous memory and to place executable segments in the location where they could be executed. An equivalent operation is unnecessary in Windows NT's virtual memory management system for three reasons. One, code segments are no longer required to reside in the 0-640K range of memory in order for Windows NT to execute them. Windows NT does require that the hardware have at least a 32-bit address bus, so it is able to address all of physical memory, regardless of location. Two, the VMM virtualizes the address space such that two processes can use the same virtual address to refer to distinct locations in physical memory. Virtual address locations are not a commodity, especially considering that a process has 2 GB available for the application. So, each process may use any or all of its virtual addresses without regard to other processes in the system. Three, contiguous virtual memory in Windows NT can be allocated discontiguously in physical memory. So, there is no need to move chunks to make room for a large allocation.

Thursday, November 22, 2007

OPERATING SYSTEM

1.Two cases of what the author considers grave misconduct in journal reviewing led him to consider how we could improve how journals review submissions. He wanted to treat anonymous peer reviewing as a given because no reasonable reengineering of the review process seems to have proposed a workable alternative. Although all the author's data derives from personal experience, the sample is not small, amounting to around 30 rejected submissions to journals and conferences. In each case, he carefully distilled the reviewers' comments to gauge if they constituted constructive reviewing.
Effective communication of technical work is the primary goal of the technical journal. This essay provides information about the IBM Systems Journal and offers guidelines for prospective authors. The Systems Journal and its audience are described, and the processing of papers is discussed, along with suggestions for content and structure. To further aid the writer in preparing clear, complete papers of high quality, we include a bibliography of technical writing references.The Journal welcomes submissions from members of the worldwide professional and academic community who are interested in advances in software and systems. The following guide for authors was published in Volume 33, Number 4 (1994) of the IBM Systems Journal and is available as a reprint.
To order the reprint version, make note of the reprint order number given above, and consult IBM Systems Journal subscription and ordering information.

2. The regional bank might decide to buy a six-server computer instead of one supercomputer: six RLX ServerBlades in a standard 1U chassis, completing every phase of product design (overall project plan, system architecture, specifications development, system detail design, risk analysis and mitigation, two prototype builds and testing, environmental testing, regulatory and compliance testing, and transition to pilot production) in five months. Supercomputers, by contrast, are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies, and scientific research laboratories are heavy users.