each process is given a “quantum”. if the process blocks or finishes within this time interval, the scheduler switches to the next process; if it’s still executing at the end of the quantum, it is preempted and the next process is loaded
the part that requires more attention is the size of this “quantum”. if it is too small, the time taken to switch to the next process starts to dominate: running a number of short tasks with a very small quantum can take far longer in total than simply running each of them one after the other
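the quantum/switch-cost trade-off can be made concrete: with quantum q and context-switch cost s, the fraction of CPU time spent on useful work is q / (q + s). a tiny sketch (the millisecond figures are made up):

```python
# With quantum q and context-switch cost s, the CPU spends
# q / (q + s) of its time doing useful work.

def cpu_efficiency(quantum_ms: float, switch_ms: float) -> float:
    return quantum_ms / (quantum_ms + switch_ms)

# a 1 ms switch cost with a 4 ms quantum wastes 20% of the CPU,
# while a 100 ms quantum wastes under 1% (but hurts responsiveness)
print(cpu_efficiency(4, 1))    # 0.8
print(cpu_efficiency(100, 1))  # ~0.99
```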
round robin does not take into account that some processes matter more than others. automatic email checking, update checking and other various background tasks probably aren’t as important as a video player application the user has running. for this, we might want to give every running program a priority.
one simple way to set the priority dynamically is 1/f, where f is the fraction of the last quantum the process used: interactive processes that barely touch their quantum float to the top, while CPU-bound processes sink.
in interactive systems it’s also nice to get a feeling of responsiveness. executing the shortest job first will always give the shortest average response time. the main trouble with this is figuring out how long a process will take to run.
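as a sketch, the 1/f rule in code (the clamp value is an assumption, just to avoid dividing by zero):

```python
# Dynamic priority via the 1/f rule: a process that used only a small
# fraction f of its last quantum is likely interactive, so it gets a
# high priority; a CPU-bound process (f close to 1) gets a low one.

def priority(fraction_of_quantum_used: float) -> float:
    # clamp f to avoid division by zero for a process that used no time
    f = max(fraction_of_quantum_used, 0.01)
    return 1 / f

print(priority(0.02))  # 50.0 -> very interactive, high priority
print(priority(1.0))   # 1.0  -> CPU-bound, low priority
```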
if we think of the time between waits as individual jobs, we can take a running average of them (with some extra weighting on the more recent ‘jobs’). this way, if a process changes its CPU usage, the estimate tracks the recent iterations of these “jobs”
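this weighted averaging is usually done with aging: new_estimate = a * last_burst + (1 - a) * old_estimate. a small sketch (the weight a = 0.5 and the burst values are assumptions):

```python
# Aging estimate of a process's next CPU burst, used to approximate
# shortest-job-first without knowing run times in advance. The weight
# `a` controls how quickly old behaviour is forgotten.

def aged_estimate(old_estimate: float, last_burst: float, a: float = 0.5) -> float:
    return a * last_burst + (1 - a) * old_estimate

est = 10.0
for burst in [10, 2, 2, 2]:   # the process suddenly becomes I/O-bound
    est = aged_estimate(est, burst)
print(est)  # 3.0 -- the estimate tracks the recent short bursts
```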
in multi-user systems, if user 1 has one process and user 2 has 3, normal round robin would give user 2 far more CPU time. FSS (fair-share scheduling) takes this into account, ensuring that each user gets a “fair” share. if we say that user 2 should get 50% of CPU time, the scheduler will guarantee this, provided user 2 has processes to run.
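a minimal sketch of the idea, interleaving users before processes so that a user with many processes can’t crowd out a user with one (user and process names are made up):

```python
# Fair-share sketch: instead of one round-robin queue over all processes,
# round-robin over users first, then over each user's processes.

def fair_share_order(users: dict[str, list[str]], slots: int) -> list[str]:
    # users maps a user name to that user's process list (hypothetical data)
    cursors = {u: 0 for u in users}
    user_cycle = list(users)
    order = []
    i = 0
    while len(order) < slots:
        u = user_cycle[i % len(user_cycle)]
        procs = users[u]
        order.append(procs[cursors[u] % len(procs)])
        cursors[u] += 1
        i += 1
    return order

print(fair_share_order({"user1": ["A", "B", "C"], "user2": ["X"]}, 6))
# ['A', 'X', 'B', 'X', 'C', 'X'] -- each user gets half the CPU time
```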
for RTS (real-time systems) we need to guarantee that things happen at a given time, e.g. medical devices delivering medicine at the correct times and rates, or machines on factory floors
the system is only schedulable if

sum (i = 1 to m) of Ci/Pi ≤ 1

where m is the number of periodic events, and event i occurs with period Pi and requires Ci seconds of CPU time
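this schedulability condition (the sum of Ci/Pi must be at most 1) can be checked directly; the example event figures below are made up:

```python
# Schedulability test for a real-time system: m periodic events, event i
# occurring with period P_i needing C_i units of CPU per occurrence.
# The system is schedulable only if total utilisation sum(C_i / P_i) <= 1.

def schedulable(events: list[tuple[float, float]]) -> bool:
    # events is a list of (period, cpu_time) pairs
    return sum(c / p for p, c in events) <= 1

# e.g. events every 100, 200 and 500 ms needing 50, 30 and 100 ms of CPU:
print(schedulable([(100, 50), (200, 30), (500, 100)]))  # True (0.85 <= 1)
# adding an event every 1000 ms needing 200 ms pushes utilisation to 1.05:
print(schedulable([(100, 50), (200, 30), (500, 100), (1000, 200)]))  # False
```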
high speed memory is in great demand from all programs running on computers. unfortunately it is a finite resource, so managing what is loaded into RAM at any given moment is something that requires great attention and care.
the part of the OS that manages this complex juggling act is known as the “memory manager”
originally there was no abstraction on top of memory: all programs had access to the system’s entire physical memory. when trying to run multiple programs, this can very quickly lead to serious issues. let’s say that two running programs try to write/read the same address; it’s clear that this can lead to undefined behaviour and unintended consequences.
one thought was to segment memory into physical “arenas”: one for the OS, one for device drivers and another for user programs.
we could put all programs into one process with multiple threads. this way memory sharing is assured (obviously we probably don’t want to do this, unrelated programs should not share memory as this is a clear security risk)
it’s possible to run multiple programs if, when switching to another process, we swap the entire contents of the user program memory arena to disk. funnily enough, we call this “swapping”
with the addition of some special hardware we can start to devise ways of keeping multiple programs in memory at once.
divide all memory into chunks, assigning each chunk a unique key to protect it from being written to/read by programs in other chunks.
the problem with protection keys is that programs may span multiple chunks, so we can’t stop a program from jumping into another program’s chunks. and since addresses are absolute, a program may not know where in memory it has been loaded, making bugs like this a lot easier to trigger.
we could compile programs in such a way that each address is added to some kind of base address number at load time, so that no memory is ever addressed absolutely. this is known as “static relocation”
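a toy sketch of this load-time rewriting (the “binary” and its list of address slots are made up; real loaders use a relocation table emitted by the linker):

```python
# Static relocation sketch: at load time, the loader adds the program's
# load address to every address word in its binary, so absolute addresses
# become correct for wherever the program actually landed in memory.

def statically_relocate(code: list[int], address_slots: list[int], base: int) -> list[int]:
    # code is a toy "binary" of integers; address_slots lists which of
    # those integers are addresses that must be rewritten
    relocated = code.copy()
    for i in address_slots:
        relocated[i] += base
    return relocated

program = [1, 28, 7, 100]        # slots 1 and 3 hold addresses 28 and 100
print(statically_relocate(program, [1, 3], 16384))
# [1, 16412, 7, 16484] -- addresses now point into the loaded region
```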
these days it’s very rare for a system not to have memory abstraction; it’s too large an issue to not have a standard solution for. from the potential solutions discussed so far, two issues must be solved by the abstraction: relocation and protection
each process has its own address space, independent of the others
the concept of address spaces is not unique to computer architecture, phone extensions such as “+44”, top level domains like “.com” and even road names are all examples of address spaces.
address 28 in one space has a completely different physical address than 28 in another
in order to arrange these memory spaces we can use “base and limit registers”
when a process is loaded, its “base” and “limit” registers are also loaded; these contain the process’s start position in memory and its length respectively
this way, when address 28 is referenced, it is added to the base and checked against the limit, ensuring protection of other spaces.
the disadvantage to this is that whenever a location in memory is referenced, an addition and a comparison must be performed
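the translation step can be sketched directly; the base/limit values are made up:

```python
# Base-and-limit address translation: every virtual address is added to
# the base register and compared against the limit -- one addition and
# one comparison per memory reference.

def translate(virtual_addr: int, base: int, limit: int) -> int:
    if not 0 <= virtual_addr < limit:
        raise MemoryError(f"address {virtual_addr} outside this address space")
    return base + virtual_addr  # the addition mentioned above

print(translate(28, base=16384, limit=4096))  # 16412
# translate(5000, base=16384, limit=4096) would raise: protection fault
```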
realistically, the memory demand of all programs running at any given moment will probably exceed the capacity of system RAM. dealing with memory overload is something all systems have to handle; swapping out programs is one potential solution.
if a process is idle, it’s written to disk, freeing its section of memory.
this can often leave holes in memory where swapped-out processes once were.
to deal with these holes, the OS can run a process known as “memory compaction” to remove them, but this is a highly CPU-intensive operation and so is not done often.
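a minimal sketch of why compaction works but costs so much: every process may need copying to a new address (the segment layout below is made up):

```python
# Memory compaction sketch: slide every process down over the holes so
# all free memory ends up as one hole at the top. Every byte of every
# moved process may need copying, which is what makes it expensive.

def compact(segments: list[tuple[str, int]]) -> list[tuple[str, int, int]]:
    # segments: ordered list of ("P" or "H", length); returns the
    # processes' new (kind, start, length) records plus one merged hole.
    out, addr, free = [], 0, 0
    for kind, length in segments:
        if kind == "P":
            out.append(("P", addr, length))
            addr += length
        else:
            free += length
    out.append(("H", addr, free))
    return out

print(compact([("P", 4), ("H", 2), ("P", 3), ("H", 5)]))
# [('P', 0, 4), ('P', 4, 3), ('H', 7, 7)]
```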
we must also consider that each process can grow its memory image. for instance, photoshop may start at 500MB but during usage can easily grow to multiple gigabytes
another way to manage memory is virtual memory where programs are still able to run when partially or fully written to disk. this will be covered in another week.
the first field of each node in the list marks whether it is a hole (H) or a process (P); then comes the start position of the chunk, then its length
in this example, the list is sorted by start position in physical memory
it may be more convenient to use a doubly linked list, so that figuring out whether neighbouring holes can be merged is easy
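the (kind, start, length) node list and the merge check described above can be sketched like this (plain Python lists stand in for the linked list; the memory layout is made up):

```python
# When a process terminates, its node becomes a hole and is merged with
# any neighbouring holes -- the check a doubly linked list makes easy.

def free_process(nodes: list[list], index: int) -> list[list]:
    # nodes: list of [kind, start, length], sorted by start address
    nodes[index][0] = "H"
    # merge with the following hole, then with the preceding one
    if index + 1 < len(nodes) and nodes[index + 1][0] == "H":
        nodes[index][2] += nodes[index + 1][2]
        del nodes[index + 1]
    if index > 0 and nodes[index - 1][0] == "H":
        nodes[index - 1][2] += nodes[index][2]
        del nodes[index]
    return nodes

mem = [["P", 0, 5], ["H", 5, 3], ["P", 8, 6], ["H", 14, 2]]
print(free_process(mem, 2))
# [['P', 0, 5], ['H', 5, 11]] -- all three free regions merged into one
```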
we can use a number of different algorithms to decide where in the list to slot a new process: first fit, next fit, best fit, worst fit and quick fit.
worst fit works the same way as best fit, but instead of selecting the hole closest in size, it selects the largest hole. this is also not a good algorithm to choose
we can also separate the linked lists: one for holes and another for processes. this lets us sort the holes by size instead of position, which makes first fit and best fit equal in terms of performance and renders next fit pointless. when we organise the lists in this way, memory allocation gets a bump in performance.
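a quick sketch of why the size-sorted hole list equalises the two algorithms: the first hole that is big enough is also the smallest one that fits (hole sizes are made up):

```python
# First fit over a size-sorted hole list: scanning smallest-to-largest,
# the first hole large enough for the request is exactly the hole best
# fit would pick, so the two algorithms coincide.

def first_fit(holes_sorted_by_size: list[int], request: int):
    for i, size in enumerate(holes_sorted_by_size):
        if size >= request:
            return i  # index of the chosen hole
    return None

holes = sorted([12, 3, 7, 20])   # [3, 7, 12, 20]
print(first_fit(holes, 6))   # 1 -> the 7-unit hole, the best-fitting one
print(first_fit(holes, 25))  # None -> no hole is large enough
```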