Frogboy

64-bit Windows is better than getting an SSD if you have the RAM

I wrote this over on my neowin blog:

http://www.neowin.net/forum/index.php?automodule=blog&blogid=8&showentry=2280

Bottom line: Given the choice between an SSD or running 64-bit with 8 gigs of RAM, get the extra RAM.

207,287 views 59 replies
Reply #51

The trick with multi-core systems is that the SOFTWARE needs to be written multi-threaded to take advantage of it.

This part is true. An application uses a single core unless it's designed to run several threads.

I should point out, however, that having dual cores does help responsiveness a bit since it can spread the tasks between cores. This has diminishing returns as the number of cores increases, though. The best way is to use applications designed for multiple cores.

Therefore, for most people 2 cores is plenty, as one core is busy with your currently active program while the other handles OS background tasks, widgets, your music player, etc.

Well, it can theoretically work like this, yes, but realistically most OSes do things a bit differently. You CAN do this by setting the affinity or by using software that sets affinity, but most OSes simply try to level out the CPU usage and pay more attention to how much CPU an application is taking rather than whether it's in the foreground or not.
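For the curious, pinning by hand looks something like this. A minimal sketch assuming a Linux box (`os.sched_setaffinity` is Linux-only; on Windows the equivalent job is done by `SetProcessAffinityMask`):

```python
import os

# Pin the current process to CPU 0, then read the mask back to confirm.
# Without a call like this, the scheduler is free to migrate the process
# between cores as it balances load.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # → {0}
```

Most of the time you don't want to do this; the scheduler's own balancing is usually better than a hand-picked mask.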

BTW, you'll find that 99% of windows programs are single-threaded.

You'd be surprised how many applications are multithreaded . . .

It might be more accurate to say that 99% of applications are not designed to use cores evenly or take full advantage of them. They mostly throw threads at the CPU hoping the OS will take care of distributing them.

Even without multiple cores, threads have long been used to do things like keeping an application's GUI responsive while other parts of the application wait for resources. This type of threading, though, isn't designed with multiple cores in mind, although it will work.
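That responsiveness pattern can be sketched in a few lines. The names here are illustrative, not from any real toolkit:

```python
import threading
import time

# A worker thread waits on a slow resource while the "GUI" thread stays
# free to handle events. This works fine on a single core, because the
# waiting thread isn't using the CPU at all.
def fetch_slow_resource(results):
    time.sleep(0.1)          # stand-in for a disk or network wait
    results.append("data")

results = []
worker = threading.Thread(target=fetch_slow_resource, args=(results,))
worker.start()
print("GUI still responsive while the worker waits")
worker.join()
print(results)  # → ['data']
```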

Servers have also long had multiple CPU/Core systems because they have to manage many clients simultaneously. Perhaps we are opening the floodgates for people running their own personal servers? With the big push for cloud computing, I wouldn't be surprised if more people started wanting to run the server side of things as well and not just the client side of things.

It's an event that seems to repeat itself throughout computer history: Something starts in big businesses, on large servers and mainframes. But it gets miniaturized and put into PCs, and before we know it everybody is doing it in a small, inexpensive box. It's happened before, and it'll happen again. If you want to see what people will be doing in their homes a few years from now, look at what businesses are doing with large server racks right now. Because a few years from now, a PC will be as powerful as all of those racks. A few years more, and your phone will be that powerful as well.

This whole Software as a Service thing? It'll last until people start realizing they can host their own software. Next thing you know, they'll want to own their own software so they can host it for themselves. Geeze, where have I seen that before?

History will, as always, repeat itself.

Those of us who are using 4 or 8 core machines are doing so because we have high end 3D software or video editing software, for example, that actually can peg all of our cores to the maximum when rendering.

I think gaming will increasingly take advantage of multiple cores as well. A lot of tasks in games are well suited to parallelism. Physics and AI come to mind. Graphics would work too, although that is usually handed to the GPU.
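As a toy sketch of that kind of decomposition (the function names are made up; in CPython, CPU-bound work would want processes rather than threads, but the split into independent tasks is the same):

```python
from concurrent.futures import ThreadPoolExecutor

# Physics and AI for a game frame don't depend on each other, so they can
# be submitted as separate tasks and run on different cores.
def physics_step(dt):
    return f"physics advanced {dt}s"

def ai_step(n_agents):
    return f"AI updated {n_agents} agents"

with ThreadPoolExecutor(max_workers=2) as pool:
    phys = pool.submit(physics_step, 0.016)
    ai = pool.submit(ai_step, 64)
    print(phys.result())  # → physics advanced 0.016s
    print(ai.result())    # → AI updated 64 agents
```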

I also think that we'll find more uses for multiple cores. In computer science there is a lot of theory about problems that may be better suited to parallel machines, and some programming languages, such as Smalltalk and Erlang, use paradigms that are well suited to parallelism. It's possible that there's some killer application lurking around somewhere.

Reply #52

BUG: Next page doesn't appear until it has two posts . . .

+1
Reply #53

Quoting CobraA1, reply 24

I have disabled disk caching completely and it works great.
Which cache? Completely disabling the OS disk cache would mean that any application that writes to disk will pause while the write is taking place. It would be painfully slow. I'm guessing you just turned off SuperFetch. I don't think you completely disabled caching.

I did not mean SuperFetch, although I do have that disabled as well. SuperFetch, in oversimplified terms, is the pre-loading of commonly used programs (determined by stats the O/S collects) into RAM, or into the paging file, when the O/S starts up, depending on the hardware and software config. Maybe you thought I meant write caching on the disk?

I meant that I disabled caching of files to the hard disk; I wasn't referring to the disk cache itself (if that makes sense? lol probably not, I stayed up too late playing Sins!). Example: I have disabled the paging file on the hard disk and ReadyBoost; there was maybe one other but I don't remember the name hehehe.


Also if you check the specs on most SSD their seek time and read is actually slower than the 10,000 RPM raptors.
The fastest Raptor is 4.2 ms, and SSDs are in the 0.02 ms range. They have no moving parts, which means they don't have to wait for a head to move.

And they don't have to wait for drive spin-up... sorry, I switched my wording around on that, thanks for catching it (lol what was I thinking)



And don't believe the "3 Gb/s" listed on any mechanical drive. That's the speed of the bus from the buffer, not the speed the platters can actually read. A drive with a 16 MB cache can maintain 3 Gb/s for about 0.04 seconds. After that, it's much slower.

Well, none of the standard Raptors even bother with 3 Gb/s SATA, as the manufacturer was smart and knew they wouldn't use up even the interface's existing capacity. (The VelociRaptor apparently does have 3 Gb/s SATA, but again, it doesn't even need it.)
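A quick back-of-envelope check of the buffer-drain figure quoted above:

```python
# How long can a 16 MB buffer feed a 3 Gb/s link at full speed?
link_bytes_per_s = 3e9 / 8      # 3 Gb/s ≈ 375 MB/s
cache_bytes = 16 * 1024 ** 2    # 16 MB buffer
print(round(cache_bytes / link_bytes_per_s, 3))  # → 0.045
```

So the ~0.04-second figure checks out: after that, you're back to what the platters can actually deliver.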


Unless you're editing video, drive speeds are rather moot in my opinion. For me a disk is just permanent storage, and I prefer to keep my software in RAM as much as possible for maximum performance.

I couldn't agree more, but I got the Raptors so my level load times in games are much faster. There is a surprising amount of difference between loading on a 7200 RPM drive and loading on the Raptor.

Overall O/S speed: no change between hard drives, but disabling the paging file and forcing everything into RAM made a pretty noticeable difference across the board.

Reply #54

I'm gonna move to 64-bit, but it'll probably be Ubuntu.

Reply #55

If the SSD trend catches on (er, when...not if...), I could see us throwing a lot less L2/L3 cache on the CPUs and putting on more cores instead. The cache miss penalty just isn't that bad anymore. Arbitrating all the cores' concurrent memory accesses almost costs more than the memory access itself. This means, going forward, CPU cost shouldn't particularly go up even though they've got 8 cores.

What we will probably NOT see, though, is integrated SSD directly on the CPU. They use different manufacturing processes. So the CPU will probably continue talking to the SSD at the board level. It would be totally awesome if we could integrate persistent memory directly onto the CPU chip, because that would mean sub-1-nanosecond "hard-drive" seek times. You do the math on what that would mean. But I don't think it's going to happen anytime soon.
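The rough math on that, using the ~0.02 ms SSD seek figure quoted earlier in the thread:

```python
# Ratio of a current SSD seek to a hypothetical sub-1 ns on-die store.
ssd_seek_s = 0.02e-3   # ~0.02 ms
on_die_s = 1e-9        # the hypothetical 1 ns case
print(round(ssd_seek_s / on_die_s))  # → 20000
```

That's roughly four orders of magnitude, which is why on-die persistent memory would be such a big deal if it ever happened.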

Reply #56

most OSes simply try to level out the CPU usage and pay more attention to how much CPU an application is taking rather than whether it's in the foreground or not.

The effect is the same, however, since an active foreground application will be taking cycles on one core, which will cause the OS to load-balance secondary operations onto additional cores. XP/Vista/7 do an excellent job of spreading the wealth around evenly.

It might be more accurate to say that 99% of applications are not designed to use cores evenly or take full advantage of them.

Of course.  I was presenting a simple layman's description without going all in.  :)

I think that increasingly gaming will also take advantage of multiple cores as well. A lot of tasks in games are well suited for parallelism. Physics and AI come to mind. Graphics would also work, although that is usually given to the GPU.

Agreed.  Games are the next big push in taking advantage of multiple cores.  In fact, today's games already are doing this.  The latest Unreal engine is highly threaded to take advantage of multiple CPUs and, of course, SLI technology addresses this for GPUs at the hardware level as well.

Reply #57

All Stardock games are very multithreaded. It's what got me into programming in the first place really.

Our new game, Elemental, has a new graphics engine that is explicitly designed for multiple GPU/core setups.

Reply #59

Quoting Lord, reply 16
I'd rather have Vista 64 with 2 64GB SSDs in RAID. And not just any SSD mind you, but Memowrite SSDs; they're pretty much the fastest you can get right now. And that would be just for the system drive.

My Ideal setup

2 64GB Memowrite SSD in RAID config for system.

2 Western digital Velociraptors in RAID (300GB each) for games.

2 Western Digital 1TB Greenpower drives in RAID for storage.


Yeah I know it's an expensive setup, but if money were no object, that's what I would do.

If money were no object, I would pay for a new type of motherboard to be developed and then buy 1TB of the best RAM and a nuclear generator to keep it all running.

Oh wait...