Power Developer
https://powerdeveloper.org/forums/

CONFIG_AUDITSYSCALL support
https://powerdeveloper.org/forums/viewtopic.php?f=7&t=2264
Page 1 of 1

Author:  tokenrove [ Sun May 27, 2012 11:29 am ]
Post subject:  CONFIG_AUDITSYSCALL support

I'd like to be able to use a tool like e4rat to optimize the boot process, but it requires a kernel with audit and audit syscall support. I see this was added for ARM some time after 2.6.31, which is the kernel I'm using (linux-image-2.6.31.14.27-efikamx). My understanding of the different kernels available is that if I move to the more modern Linaro kernel, I lose some of the Efika MX specific support. Would it be feasible to backport audit syscall support to the supported kernel, or is there a plan to modernize the supported kernel version soon?

Thanks.

Author:  wschaub [ Sun May 27, 2012 2:25 pm ]
Post subject: 

We are currently working on a 3.x kernel but it won't be released until it is ready. Kernel hacking isn't my department though so I will leave the rest of the details for someone else to answer.

Author:  jakobcreutzfeldt [ Tue May 29, 2012 3:32 am ]
Post subject: 

It's worth noting that e4rat predominantly benefits users with traditional hard-disk drives. SSDs probably won't benefit from it since they don't have much access latency. So, it's questionable how much the boot process of an Efika MX will be optimized by e4rat...

Author:  Neko [ Wed Jun 13, 2012 1:51 pm ]
Post subject: 

Quote:
It's worth noting that e4rat predominantly benefits users with traditional hard-disk drives. SSDs probably won't benefit from it since they don't have much access latency. So, it's questionable how much the boot process of an Efika MX will be optimized by e4rat...
It's actually still quite significant, because it optimizes the file layout around the kernel's read-ahead behavior. Imagine the process as follows:

App loads 512 bytes of a file. Kernel reads ahead 16k and buffers it.

App loads 512 bytes of another file which happens to sit 32k further into the disk - the kernel cache misses, so it reads from that offset, reads ahead another 16k, and buffers that too.

In fact you could take the shorter route, which is to put all the likely accesses within the same 16k, in the right order or close to it, so that every disk access stops being a round trip from VFS to cache to SCSI to ATA to flash controller and stops at the cache instead.
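The layout effect described above can be sketched with a toy simulation (the 16k readahead window and 512-byte reads are the figures from this post; the two layouts and the `cache_misses` helper are hypothetical examples, not how e4rat itself works):

```python
READAHEAD = 16 * 1024  # the 16k readahead window from the example above

def cache_misses(offsets, window=READAHEAD):
    """Count reads that must go to the device, assuming each miss
    pulls `window` bytes into the cache starting at the missed offset."""
    cached = []   # (start, end) byte ranges already buffered
    misses = 0
    for off in offsets:
        if not any(start <= off < end for start, end in cached):
            misses += 1
            cached.append((off, off + window))
    return misses

# Same four 512-byte reads, two layouts: scattered 32k apart vs. packed
# contiguously, as e4rat's reordering aims for.
scattered = [0, 32 * 1024, 64 * 1024, 96 * 1024]
packed = [0, 512, 1024, 1536]
print(cache_misses(scattered))  # 4 - every read is a device round trip
print(cache_misses(packed))     # 1 - one readahead serves all four
```

The point being that the packed layout pays the device latency once, and the remaining reads stop at the cache.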

This takes noticeable CPU and IO time, since even an SSD access has latency far larger than an SDRAM access - not nearly as much as a hard disk (a hard disk is anywhere from 2ms to 200ms, while an SSD is anywhere between 20us and 200us depending on the operation) - but it is still noticeable compared to simply looking up a cache entry, which is probably in the 100-200ns range. With these values that is AT LEAST 200x faster, but we also aren't counting the userspace->kernel switch time, which may again be 100-200ns, maybe a little higher. So let's say somewhere between 50x and 100x?

Just to summarize in a rather naive way: hard disk latency is measured in tens of milliseconds (1/1000th of a second), SSD latency in tens of microseconds (1/1000000th of a second), and SDRAM latency in tens of nanoseconds (1/1000000000th of a second). Most RISC processors do one instruction per clock, and can fetch from L1 or sometimes L2 cache in one clock (1 nanosecond at 1GHz). For every step to the next subsystem down the chain, all the way up to swapping your SSD back for a standard hard disk, you are increasing the time taken, not counting processing, by a factor of 1000 on access time alone. If you include the abstractions, processing time, sorting, buffering, and transfer rate, you are still getting a rather significant gain. Even if it's not a million times faster as the theory might suggest, 20x more efficient data access can lead to a 3x faster boot (as per e4rat's bootcharts), which turns your 45 second wait for the login manager into a 12 second one. Imagine what you can do if you have a 5 second boot already...
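The latency ladder above can be checked with quick back-of-the-envelope arithmetic (the figures below are rough midpoints picked from the ranges quoted in this post, not measurements):

```python
# Rough representative latencies from the ranges in the post above.
hdd_s = 10e-3    # hard disk: "tens of milliseconds"
ssd_s = 50e-6    # SSD: "tens of microseconds"
sdram_s = 50e-9  # SDRAM: "tens of nanoseconds"

# Each step down the storage hierarchy is roughly three orders of
# magnitude, which is the factor-of-1000 per subsystem mentioned above.
print(f"HDD vs SSD:   {hdd_s / ssd_s:.0f}x slower")    # ~200x
print(f"SSD vs SDRAM: {ssd_s / sdram_s:.0f}x slower")  # ~1000x
```

The exact multipliers depend on where in each range you pick your numbers, but the orders of magnitude hold regardless.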

Probably shave it down to 4.2 seconds, but you get the idea. It's still a significant number of CPU cycles saved if you're counting it in NANO-seconds :)

Author:  jakobcreutzfeldt [ Thu Jun 14, 2012 3:20 am ]
Post subject: 

Ah ok, thanks for the in-depth explanation! I was under the impression that e4rat primarily worked by overcoming the limitations imposed by the spatial arrangement of data on a traditional hard-drive and the mechanical means by which it must be accessed whereas, for all intents and purposes, access from any location on an SSD has about the same latency as any other location. But, it looks like my chief misjudgment was in that "for all intents and purposes" statement and that indeed, CPU caching will also have an effect.

I stand corrected. :) So then, I too am interested in e4rat for my Efika MX!
