For You Believers in DELL


Parsad


Well, all the apps can be run on ARM. So right now you may go out looking for an x86 server or a virtual machine instance to run your application. Eventually you will look for an app container (this already exists with things like Heroku, Google App Engine, and dotCloud) and you couldn't care less what is under the cover. It could be a virtual machine at Amazon, it could be your app running on an ARM core somewhere. All you know is that you write the app, upload it somewhere, and it runs.

But the thing is that the software and hardware are related.

 

If you want your software to run fast... then you HAVE to pay attention to the underlying hardware and write for particular hardware.

 

Google has a good paper on wimpy cores versus brawny cores.  If you use wimpy cores, you may need to write your software a certain way.  Usually it is better to throw more hardware rather than more programmers at the problem.  Programmers are usually more expensive.

http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/fr//pubs/archive/36448.pdf
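
To make the "more hardware vs. more programmers" trade-off concrete, here is a back-of-the-envelope sketch in Scala; every dollar figure in it is my own illustrative assumption, not a number from the paper.

```scala
// Rough break-even: how many extra servers does one year of engineering effort buy?
object HardwareVsProgrammers {
  val serverCostPerYear   = 3000.0    // assumed: amortized cost of one extra server, USD/yr
  val engineerCostPerYear = 200000.0  // assumed: fully loaded cost of one engineer, USD/yr

  def equivalentServers(engineers: Int, years: Double): Double =
    engineers * engineerCostPerYear * years / serverCostPerYear

  def main(args: Array[String]): Unit = {
    // A 2-engineer, 1-year optimization effort costs roughly as much as ~130 extra servers,
    // so unless you run far more servers than that, just buy the hardware.
    println(f"Break-even fleet size: ${equivalentServers(2, 1.0)}%.0f servers")
  }
}
```

The flip side, which comes up later in this thread, is that at Google/Facebook fleet sizes the same arithmetic can tip the other way.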

 

ARM-based products such as Tilera and Calxeda, to my understanding, are wimpy cores taken to the extreme.  Wimpy cores aren't optimal for most uses.  Tilera and Calxeda may or may not fill a niche.  Intel (Centerton/Moonshot) and AMD (Seamicro) are also going to attack that niche.

 

At this point I think this discussion is probably off topic to the original thread and might be good to start a new one. Thoughts?

I don't know what the proper forum etiquette for this is.


Guest valueInv

What are you seeing in the private vs public cloud?

 

Should we start a new thread for this topic? Not sure what the protocol is on the board with regards to what is or isn't off topic for a thread.

 

I think this question is very relevant to Dell. If you look at the large public cloud vendors like Google or Amazon, they use cheap custom hardware. Their software stacks consist of stitched-together open source and proprietary software. They are going to buy very little software, services or hardware from Dell. Effectively, Dell gets locked out. So if a future customer buys SFDC licenses instead of implementing some part of it in a private cloud, and if SFDC hosts it on AWS, there are very few dollars up for grabs for companies like Dell.

 

If on the other hand, private clouds form a big portion of spending, enterprises need to buy a lot of hardware, software, services. Then Dell has a bigger play. The question then is, will they be able to compete against IBM and Oracle?

 

Ok, random stream of thoughts on this subject...

 

So clearly public cloud is a big deal. Amazon Web Services, which is hidden under "other income" on their income statements, is over $1 billion now. One of these days I will start a thread on how to value Amazon's AWS business properly, because I am trying to figure out what the IV of that part of their business is.

 

Interest in private cloud ATM is big. I can't provide a real number because I don't know, but maybe Gartner has an estimate somewhere. Companies want the ability to have an AWS-like service behind their firewall. IMO the reason is not always that they want to buy cheaper hardware; it's operational efficiency. Developers go to Amazon and turn on 20 servers in 1 minute, and they get excited about how much easier it can make their job. But they can't do their day-to-day work there because company policy prohibits it. No traditional company wants their IP or customer data on a public cloud, even though the reality might be that Amazon's IT security folks are better skilled than theirs.

 

IMO a Fortune 500 company would pay normal prices for Dell or HPQ hardware, and pay for services on top of that, to have a private cloud, because it would increase efficiency so much. You need a server to do some testing? You need 30? Well, instead of going through the procurement process and waiting 3 months or more, you can have it right now. IT can bill your department internally based on how long you used those instances. You are spinning up these instances on a large pool of resources that are centrally managed by IT, but you don't even have to talk to them to get your work done. They just set up a quota on your account for how many instances you can use. I think for enterprise companies the cost savings in just getting "more work done faster" is a huge value by itself. Plus they already have IT staff and data center staff that they have to pay for their legacy systems, so migrating to a public cloud does not save them OPEX unless they move a lot of stuff, lay off some IT folks and shut down data centers.
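
To make the self-service-plus-chargeback idea concrete, here is a minimal sketch in Scala; the rate, quotas and department names are all made-up assumptions, not anything from Dell or OpenStack.

```scala
// Hypothetical internal chargeback: IT bills each department for instance-hours
// used, and bounces requests that exceed the department's concurrent-instance quota.
object Chargeback {
  case class Request(department: String, instances: Int, hours: Double)

  val ratePerInstanceHour = 0.12                       // assumed internal rate, USD
  val quotas = Map("QA" -> 30, "Marketing" -> 5)       // assumed max concurrent instances

  def bill(r: Request): Either[String, Double] = {
    val quota = quotas.getOrElse(r.department, 0)
    if (r.instances > quota) Left(s"${r.department} exceeds its quota of $quota instances")
    else Right(r.instances * r.hours * ratePerInstanceHour)
  }

  def main(args: Array[String]): Unit = {
    println(bill(Request("QA", 30, 72)))        // Right(...): a few days of test servers, billed internally
    println(bill(Request("Marketing", 20, 8)))  // Left(...): request bounced by the quota
  }
}
```

The point is not the code itself but the workflow: the developer gets capacity in minutes, and IT keeps control through quotas and internal billing instead of a three-month procurement cycle.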

 

Where companies like Dell and HPQ play into this is by helping customers get it done. Currently Dell helps companies build private clouds using software called OpenStack (http://openstack.org). OpenStack is essentially an open source clone of Amazon Web Services. It does not do everything AWS does, but it's under very active development on a 6-month release cycle and is getting better all the time. Dig around the website and look at the companies who have developers working on OpenStack. You will see the usual suspects: IBM, HPQ, DELL, RAX, RHT and many smaller companies.  The OpenStack project was created by Rackspace and NASA, who were both working on private cloud software internally and decided to open source their respective pieces.

 

  Dell's cloud website:

 

  http://content.dell.com/us/en/enterprise/cloud-computing

 

  Dell also has a software tool they built to help provision OpenStack-based clouds, called Crowbar: http://content.dell.com/us/en/gen/d/cloud-computing/crowbar-software-framework

 

  HP appears to be focusing more on building its own public cloud: https://www.hpcloud.com/

 

When looking at companies like DELL and HPQ I don't think commodity servers are the end of their enterprise IT business and here is why...

 

Google, Amazon and Facebook hire the brightest and the best and pay them handsomely to build solutions that allow them to utilize cheap hardware at scale. They had to build their own solutions because not even a 747 full of IBM consultants could do this for them; granted, they would try, rack up the billable hours, and three years later they would have something that doesn't work. I know a bunch of engineers who work at all of the companies I just mentioned, and I can tell you that what they do currently can't be done by normal companies. It requires a technology skill set and operational agility that can't be found at even the largest non-tech companies.

 

Companies that are not in the large-scale computing business want stuff that works and people to help them when it's 3:00 AM and stuff is down for some unknown reason. Data centers are not their competitive advantage, so they are not going to innovate there. Instead they want something that can be run by someone with an average IT skill set, with a support contract as a kicker in case stuff really goes wrong.

 

So in short, while there are tons of changes coming down the road in IT/cloud/servers/networking, I still think customers want services, solutions and support. I think Dell has some real opportunities there. But in the end my enthusiasm is mainly based on the beaten-down stock price, and it goes up as the price goes down. I just read that GS put Dell on the sell block with a $9 price target, music to my ears :)

 

There are multiple factors at play here:

 

1, There is a SaaS trend. There are more and more offerings out there that are SaaS versions of previously in-house implementations. SFDC, RightNow, Workday, Zuora, Box.net, Yammer, etc. are targeting different applications. Many of these are likely to be on public clouds or have public-cloud-like characteristics. So the trend of applications being hosted on public clouds may be driven by the adoption of SaaS. What is the cost differential between implementing an application on a private cloud vs buying SaaS licenses? Are they even in the same order of magnitude?

 

2, There are 3 possible scenarios:

  a, Enterprises switch directly to public clouds.

  b, Private clouds are a stopgap on the transition to a public cloud world. Then the question becomes: for how long?

  c, We reach a steady mix between private and public clouds quickly. Then, the question is what is the mix?

 

I think scenario a is unlikely because the IT departments are going to push back - they lose control, and their budgets and departments get slashed. In either case, there is some demand destruction for the kind of products that Dell makes or acquires. The question is how much, and how it affects Dell's ability to maintain prices. What will industry GMs be if server demand goes down by, say, 10%?

 

I doubt the public cloud trend will be driven by CIOs, but there are other stakeholders and forces that are more favorable to it. Consumerization-of-IT players like Box.net have set up their sales channels to bypass the IT departments. I'm sure CIOs are very excited about private clouds. But like BYOD, they may not get to make the call.

 

 

 


But the thing is that the software and hardware are related.

 

If you want your software to run fast... then you HAVE to pay attention to the underlying hardware and write for particular hardware.

 

Google has a good paper on wimpy cores versus brawny cores.  If you use wimpy cores, you may need to write your software a certain way.  Usually it is better to throw more hardware rather than more programmers at the problem.  Programmers are usually more expensive.

http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/fr//pubs/archive/36448.pdf

 

Sure, that paper is probably right. Wimpy cores (ARM) are not as good as brawny cores (x86) for all things. But keep in mind that if performance were the issue, people would not be flocking to virtual machines like they are. Virtualization increases latency and has all sorts of nasty performance impacts, the biggest of which tends to be overhead on storage I/O. People still use the hell out of it, though, because performance is not everyone's priority, or the performance is good enough to meet their needs.

 

Go back to the 90s and you had very similar things being said about the prevalent RISC CPUs at the time in response to the much cheaper Intel stuff. People from Sun and DEC would talk about how their architectures are what get the real work done, and Windows or Linux on x86 for server applications was never going to happen because their architecture was superior. They were right in that their architecture may have been better at the time, but the cheap CPUs got better and people adapted their software stacks over time to better utilize the cheaper hardware.

 

I think the wimpy CPUs are in the same boat. Keep in mind ARM has been evolving for quite a while now, since the 80s (http://en.wikipedia.org/wiki/ARM_architecture). It will continue to evolve.

 

And it looks like Facebook is a fan of wimpy cores:

 

http://gigaom.com/cloud/facebook-tilera/  2011

http://www.eetimes.com/electronics-news/4375880/Facebook-likes-wimpy-cores--CPU-subscriptions  2012

 

ARM-based products such as Tilera and Calxeda, to my understanding, are wimpy cores taken to the extreme.  Wimpy cores aren't optimal for most uses.  Tilera and Calxeda may or may not fill a niche.  Intel (Centerton/Moonshot) and AMD (Seamicro) are also going to attack that niche.

 

It depends on what you're optimizing for, IMO. The argument that software is more expensive to develop vs buying more hardware only works up to a point. These data center companies are operating at such huge scales that I believe, if the cost savings are high enough, optimizing the software to run on cheaper hardware makes sense.

 

You're right, it could be somebody other than ARM; Intel has their own product in this space. We are probably not going to see a shift in the enterprise world for quite a while. However, I don't think wimpy CPUs will remain a niche market, and I don't think some shortfall of a particular performance characteristic will prevent their adoption.

 

My 2 cents.

 

 

 


But keep in mind that if performance were the issue people would not be flocking to virtual machines like they are.

As I understand it, virtualization's main advantage is that it lowers IT labour costs.

 

The Google paper covers the pros and cons of wimpy cores versus brawny cores.  Wimpy cores only have the advantage of power / lower cost.

 

These data center companies are operating at such huge scales that I believe if the cost savings is high enough optimizing the software to run on cheaper hardware makes sense.

Wimpy core hardware should actually cost more.  In terms of the silicon, you are using a much larger area of silicon that is designed to be more power efficient. 

 

Regardless... the point is that not all software can easily be optimized to run on wimpy cores.  Each task has to be completed in a reasonable amount of time (otherwise you have to go parallel and that can get really nasty; parallelization can also be inefficient), can't take up too much memory, etc.
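
To put a rough number on how quickly "go parallel" stops paying off, Amdahl's law is the usual rule of thumb: speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction of the task and n is the number of cores. A quick Scala sketch (the 90% figure is just an illustrative assumption):

```scala
// Amdahl's law: even lots of wimpy cores can't speed up the serial part of a task.
object Amdahl {
  def speedup(p: Double, n: Int): Double = 1.0 / ((1.0 - p) + p / n)

  def main(args: Array[String]): Unit = {
    // A task that is only 90% parallelizable tops out well below the core count:
    // roughly 3.1x on 4 cores, 6.4x on 16, 8.8x on 64.
    for (n <- Seq(4, 16, 64)) println(f"p = 0.90, n = $n%2d -> ${speedup(0.90, n)}%.1fx")
  }
}
```

So on a box full of wimpy cores, the serial slice of the work is still stuck waiting on a single slow core, which is essentially the point being made above.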

 

Go back to the 90s and you had very similar things being said about the prevalent RISC CPUs at the time in response to the much cheaper Intel stuff.

As I understand it:

1- The RISC chips were faster.  Eventually it got to the point where the x86 overhead didn't matter anymore.

You can't just use cheaper hardware.  For supercomputing, performance does not scale linearly with more CPUs.  You want to start off with the fastest CPU in the first place.

 

2- For the mainframe market, the RISC chips had high availability / high reliability features (e.g. ECC RAM) that commodity Intel chips didn't.  Eventually, Intel started integrating high availability features into the Xeon.  In the long run, this will cannibalize sales of Itanium.

As I understand it, the existing solutions like Itanium, Power, SPARC will only slowly die off since customers can't be bothered with switching costs when they have systems that work.

 

----------------------------------------

It should be noted that Google is a very, very special case.  Their products are the largest applications of cloud technology out there (e.g. search, Youtube).  They have huge amounts of in-house cloud infrastructure technology that is suited for cloud applications on a ridiculous scale.  They made their own in-house file system... which they didn't even try to monetize or to make it open source (Hadoop is an open source project inspired by Google File System).  Now they are working on their next generation file system.

 

 


As I understand it, virtualization's main advantage is that it lowers IT labour costs.

 

The Google paper covers the pros and cons of wimpy cores versus brawny cores.  Wimpy cores only have the advantage of power / lower cost.

 

Yes, it does lower labor costs, and it comes at a performance penalty. In a lot of use cases the savings trump the performance penalties. I think you will see a similar shift in the brawny vs wimpy core argument. The place where you save money with wimpy cores is different than with virtualization, but the pattern is similar IMO. The pattern being: when there is a good way to reduce costs in competitive industries, these shifts happen. You made the point earlier that it is usually cheaper to buy more hardware than to change software; my belief is that Google and Facebook operate at a scale where it will make sense to change the software to run optimally on hardware that allows higher density at lower cost. For a run-of-the-mill enterprise, buying another Dell server is cheaper than paying a software engineer to optimize the software. But we are talking about huge scale that is only getting bigger.

 

Wimpy core hardware should actually cost more.  In terms of the silicon, you are using a much larger area of silicon that is designed to be more power efficient.

 

But you are talking about the cost of the physical chips, correct? I was referring to the cost of energy, cooling, rack space in the data center, etc...
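
Here is a rough sketch of that distinction in Scala: the chip is only one line item next to power, cooling and rack space over the server's life. Every input below is an assumption I made up for illustration, and it deliberately ignores the fact that a wimpy node may deliver less throughput, which is the other side of the argument.

```scala
// Hypothetical 3-year TCO per node: chip cost vs energy, cooling (PUE) and rack space.
object ServerTco {
  case class Node(chipCost: Double, watts: Double)

  val pue             = 1.6    // assumed facility overhead (cooling, power distribution)
  val dollarsPerKwh   = 0.10   // assumed electricity price
  val rackCostPerYear = 250.0  // assumed rack-space cost per node per year

  def threeYearTco(node: Node): Double = {
    val kwh = node.watts / 1000.0 * 24 * 365 * 3 * pue
    node.chipCost + kwh * dollarsPerKwh + rackCostPerYear * 3
  }

  def main(args: Array[String]): Unit = {
    val brawny = Node(chipCost = 2000, watts = 400)  // hypothetical big x86 box
    val wimpy  = Node(chipCost = 2500, watts = 120)  // hypothetical dense low-power node
    println(f"brawny: $$${threeYearTco(brawny)}%.0f   wimpy: $$${threeYearTco(wimpy)}%.0f")
  }
}
```

With those made-up numbers the pricier low-power silicon still comes out ahead per node, but only if it can actually do the same work, which is the whole wimpy-vs-brawny debate.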

 

Regardless... the point is that not all software can easily be optimized to run on wimpy cores.  Each task has to be completed in a reasonable amount of time (otherwise you have to go parallel and that can get really nasty; parallelization can also be inefficient), can't take up too much memory, etc.

 

I agree that not all software can be easily parallelized, but that does not mean it won't happen.

 

One thing that is happening currently is that there is huge interest in the cloud space in functional programming languages, one reason for this is that they are built in such a way that programming for parallel systems is much easier.
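
As a small concrete example of what the functional style buys you: a side-effect-free transformation can be run sequentially or across all cores with the same code. A minimal Scala sketch (parallel collections were bundled with Scala in this era; in current versions they live in a separate module with a CollectionConverters import):

```scala
// Same pure computation, sequential vs parallel: no threads or locks written by hand.
object ParallelWordLengths {
  def main(args: Array[String]): Unit = {
    val words = Vector.fill(1000000)("wimpy") ++ Vector.fill(1000000)("brawny")

    val sequentialTotal = words.map(_.length).sum       // runs on one core
    val parallelTotal   = words.par.map(_.length).sum   // fans out across cores via .par

    assert(sequentialTotal == parallelTotal)            // same answer either way
    println(s"total characters: $parallelTotal")
  }
}
```

Because the map has no shared mutable state, the runtime is free to split the work however it likes, which is the property that makes these languages attractive for piles of small cores.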

 

This issue is not unique to wimpy cores; people building big data applications on EC2 with 100s or 1000s of virtual machines are facing similar, though not identical, issues. Software is heading towards parallelization. It's not an easy problem to solve, but what is?

 

I guess my point here is that I see trends on both the software side and the hardware side that make me believe we are going to see a growing number of high-density, wimpy-core-based systems in use. Your arguments for why this may not happen at a large scale are that performance is important and rewriting software is expensive. My response is that I think people working on these problems are reaching scales where, if they can parallelize across lots of wimpy cores, it is cheaper to retool and rewrite software to utilize these systems optimally.

 

This is how these things work, right? You build a system that exceeds your capacity expectations, then demand outgrows it, so you optimize it to meet your growing needs, knowing eventually you will need to rebuild it. Then you design and build a new system, bring it online, and eventually, after a long time, the old one goes away. Or in the cloud model, you start adding more racks of newer gear running newer software handling the same workloads, and as the old racks die off you just replace them with the new ones.

 

One example of that is Facebook using Tilera initially for some of the work that they know can be easily done on wimpy cores and then investigating other places they can use it.

 

As I understand it:

1- The RISC chips were faster.  Eventually it got to the point where the x86 overhead didn't matter anymore.

You can't just use cheaper hardware.  For supercomputing, performance does not scale linearly with more CPUs.  You want to start off with the fastest CPU in the first place.

 

2- For the mainframe market, the RISC chips had high availability / high reliability features (e.g. ECC RAM) that commodity Intel chips didn't.  Eventually, Intel started integrating high availability features into the Xeon.  In the long run, this will cannibalize sales of Itanium.

As I understand it, the existing solutions like Itanium, Power, SPARC will only slowly die off since customers can't be bothered with switching costs when they have systems that work.

 

Yes, those are all accurate statements. I guess a better way to make my point may be this... I am willing to bet someone smart wrote a paper at some point about RISC vs CISC and how using CISC/Intel systems was not an option for one reason or another. Whatever the reasons were, this mythical paper I just made up was wrong in the long run, because the product evolved to meet the needs.

 

That is the analogy I was trying to draw, *not* that comparing ARM to x86 on an architecture level is similar to comparing x86 to SPARC or Alpha. I am totally willing to accept that this response is BS because I don't have a paper to cite. But I like to look at patterns that have occurred in the past where people say something is going one way or the other, then look at the present to try and find similar patterns.

 

In this case the pattern was experts assuring me that UltraSPARC or DEC Alpha was here to stay, when in fact they were pretty much dead wrong, compared with people saying today that low-power cores are not going to be able to replace Intel or AMD x86-based servers. As I mentioned, not an apples-to-apples comparison, but a similar pattern.

 

----------------------------------------

It should be noted that Google is a very, very special case.  Their products are the largest applications of cloud technology out there (e.g. search, Youtube).  They have huge amounts of in-house cloud infrastructure technology that is suited for cloud applications on a ridiculous scale.  They made their own in-house file system... which they didn't even try to monetize or to make it open source (Hadoop is an open source project inspired by Google File System).  Now they are working on their next generation file system.

 

Acknowledged.  Apologies for any typos, I was in a little bit of a rush. To be honest, I am not entirely sure at this point whether we are debating ARM or wimpy cores' impact on Dell and the enterprise, or their impact on the Googles and Facebooks of the world. Either way, good conversation. I think it started with the former and ended up on the latter.


But you are talking about the cost of the physical chips, correct?

Yes.

 

But I like to look at patterns that have occurred in the past where people say something is going one way or the other, then look at the present to try and find similar patterns.

I guess it comes down to predicting what will happen in the future.  For tech companies, I think that this is incredibly difficult most of the time.

 

2- I think that the whole wimpy core way of thinking came about because processors have been trending towards higher power consumption.  I think that everybody will simply adapt.  Intel will have a mix of products.  Some of them will target various niches in the server market.  So will AMD and various startups and ARM and RISC chip manufacturers.

 

I don't see an inflection point where ARM kills off Intel and AMD in the server market.  If anything, it might be the other way around.  The historical trend has been commodity hardware killing off lower-volume specialized hardware.  Server chips by these companies aren't entirely commodity consumer products (they have server features like ECC, hardware virtualization support, etc.).  But you might expect "mass-market" server chips (e.g. Intel Centerton) to kill off low-volume specialized chips (e.g. Tilera, Calxeda).  So that's my prediction.

 

Of course it is possible that some startup has some radical new approach that totally changes the industry... e.g. radically different transistor technology, quantum computing, etc.

 

One thing that is happening currently is that there is huge interest in the cloud space in functional programming languages, one reason for this is that they are built in such a way that programming for parallel systems is much easier.

One of the hotter programming languages would be Ruby on Rails.  Its appeal is mainly that it lowers development time... whereas something like C++ would enable really high levels of performance and efficient use of hardware.  The programmers only want to do really easy parallelization (e.g. each user of a cloud service is handled by a thread) and to stay away from hard parallelization. 

 

The historical trend has been for programming to go higher level... less development time, less efficient use of hardware (because hardware just gets faster and faster).  Even in photo editing where performance can be better, many applications aren't written for parallel processing (yet) even though it is ideal for parallel processing and easy to do.  The programmers were too busy adding new features.

 

-----------------------

In general, the history of computers is that they keep getting more powerful and we keep finding new uses for them.  So Dell should benefit a little bit from that.

 

As far as disruptive changes or inflection points go, I don't see Dell being hurt by ARM-based servers or Google-style cloud computing.  Google is a special case where their hardware and data centers are highly specialized.  Most companies out there don't have the scale where that would make sense.

Dell could be hurt by companies which wholesale (essentially) a fraction of a data center.  Normally you could buy a Dell server and go to a colocation company such as Peer1 or Rackspace.  Instead, you can go to Amazon and use their cloud services.  Wikipedia lists some of Amazon's cloud customers (e.g. Dropbox; I highly recommend this free/paid product).

 

-Part of Dell's business is in designing hardware (e.g. PCs, servers).  Historically this is not a great business because high returns on equity don't last.

-Part of Dell's business is in retailing/distribution.  This used to be a great business, but competition is eroding margins.

-Dell is getting into software where margins can be higher.  The margins are higher because these businesses are very difficult to duplicate.  And of course these businesses are vulnerable to disruptive changes... your software might be killed by somebody else's superior software. 

-Dell is also getting into services.  I don't understand that business well.


But you are talking about the cost of the physical chips, correct?

Yes.

 

But I like to look at patterns that have occurred in the past where people say something is going one way or the other, then look at the present to try and find similar patterns.

I guess it comes down to predicting what will happen in the future.  For tech companies, I think that this is incredibly difficult most of the time.

 

2- I think that the whole wimpy core way of thinking came about because processors have been trending towards higher power consumption.  I think that everybody will simply adapt.  Intel will have a mix of products.  Some of them will target various niches in the server market.  So will AMD and various startups and ARM and RISC chip manufacturers.

 

I don't see an inflection point where ARM kills off Intel and AMD in the server market.  If anything, it might be the other way around.  The historical trend has been commodity hardware killing off lower-volume specialized hardware.  Server chips by these companies aren't entirely commodity consumer products (they have server features like ECC, hardware virtualization support, etc.).  But you might expect "mass-market" server chips (e.g. Intel Centerton) to kill off low-volume specialized chips (e.g. Tilera, Calxeda).  So that's my prediction.

 

Of course it is possible that some startup has some radical new approach that totally changes the industry... e.g. radically different transistor technology, quantum computing, etc.

 

Well, I never said that ARM would kill Intel or AMD; I think my initial point was that these technologies could pose potential threats to companies like Dell in the cloud space if they see large adoption. When you presented the paper about wimpy vs brawny, I felt you were implying that wimpy cores don't stand a chance, and I think there is a high probability they will have a pretty big place in the future. It could be a product from Intel like a newer Atom, it could be ARM, it could be both, or it could be something completely different. I don't see Intel going away any time soon either.

 

One thing that is happening currently is that there is huge interest in the cloud space in functional programming languages, one reason for this is that they are built in such a way that programming for parallel systems is much easier.

One of the hotter programming languages would be Ruby on Rails.  Its appeal is mainly that it lowers development time... whereas something like C++ would enable really high levels of performance and efficient use of hardware.  The programmers only want to do really easy parallelization (e.g. each user of a cloud service is handled by a thread) and to stay away from hard parallelization.

 

The historical trend has been for programming to go higher level... less development time, less efficient use of hardware (because hardware just gets faster and faster).  Even in photo editing where performance can be better, many applications aren't written for parallel processing (yet) even though it is ideal for parallel processing and easy to do.  The programmers were too busy adding new features.

 

I am familiar with Ruby and the Rails framework, but that was not what I was thinking of. I was actually referring to functional programming languages, which are not designed around the von Neumann architecture like the imperative languages in mainstream use today (C, C++, Java, Ruby, C#).  Scala is a newer functional language that runs on the Java Virtual Machine and, among other things, is attempting to make it easier to deal with parallelism and concurrency. Here is a good video (16 minutes) to watch if you are interested:

 

http://www.oscon.com/oscon2011/public/schedule/detail/21055
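
For a flavour of what that looks like in practice, here is a minimal sketch using Scala's standard Futures; the shard-fetching function below is a hypothetical stand-in, not an API from the talk.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Compose concurrent work with map/sequence instead of hand-rolled threads and locks.
object FuturesSketch {
  def fetchUserCount(shard: Int): Future[Long] = Future {
    Thread.sleep(50)       // stand-in for a network call to one shard
    1000L + shard
  }

  def main(args: Array[String]): Unit = {
    // Fan out to 8 shards concurrently, then fold the results back together.
    val total: Future[Long] = Future.sequence((1 to 8).map(fetchUserCount)).map(_.sum)
    println(Await.result(total, 5.seconds))
  }
}
```

The runtime schedules the work onto a thread pool; the code itself never touches a lock, which is the "easier to deal with parallelism and concurrency" point.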

 

-----------------------

In general, the history of computers is that they keep getting more powerful and we keep finding new uses for them.  So Dell should benefit a little bit from that.

 

As far as disruptive changes or inflection points go, I don't see Dell being hurt by ARM-based servers or Google-style cloud computing.  Google is a special case where their hardware and data centers are highly specialized.  Most companies out there don't have the scale where that would make sense.

Dell could be hurt by companies which wholesale (essentially) a fraction of a data center.  Normally you could buy a Dell server and go to a colocation company such as Peer1 or Rackspace.  Instead, you can go to Amazon and use their cloud services.  Wikipedia lists some of their customers (e.g. Dropbox; I highly recommend this free/paid product).

 

-Part of Dell's business is in designing hardware (e.g. PCs, servers).  Historically this is not a great business because high returns on equity don't last.

-Part of Dell's business is in retailing/distribution.  This used to be a great business, but competition is eroding margins.

-Dell is getting into software where margins can be higher.  The margins are higher because these businesses are very difficult to duplicate.  And of course these businesses are vulnerable to disruptive changes... your software might be killed by somebody else's superior software. 

-Dell is also getting into services.  I don't understand that business well.

 

Yup. I guess we will see where everything is in 10 years. Good chance it's different than either of us expects :) That being said, I am still long DELL.

 

 


Guest valueInv

 

One thing that is happening currently is that there is huge interest in the cloud space in functional programming languages, one reason for this is that they are built in such a way that programming for parallel systems is much easier.

One of the hotter programming languages would be Ruby on Rails.  Its appeal is mainly that it lowers development time... whereas something like C++ would enable really high levels of performance and efficient use of hardware.  The programmers only want to do really easy parallelization (e.g. each user of a cloud service is handled by a thread) and to stay away from hard parallelization.

 

The historical trend has been for programming to go higher level... less development time, less efficient use of hardware (because hardware just gets faster and faster).  Even in photo editing where performance can be better, many applications aren't written for parallel processing (yet) even though it is ideal for parallel processing and easy to do.  The programmers were too busy adding new features.

 

I am familiar with Ruby and the Rails framework, but that was not what I was thinking of. I was actually referring to functional programming languages, which are not designed around the von Neumann architecture like the imperative languages in mainstream use today (C, C++, Java, Ruby, C#).  Scala is a newer functional language that runs on the Java Virtual Machine and, among other things, is attempting to make it easier to deal with parallelism and concurrency. Here is a good video (16 minutes) to watch if you are interested:

 

http://www.oscon.com/oscon2011/public/schedule/detail/21055

Ruby on Rails is not a programming language but rather a web development framework - think of it as a lightweight version of the old IBM WebSphere product. Ruby is the underlying programming language. RoR is already being upended by Node.js, which is based on JavaScript and uses a functional programming style (although the JavaScript language does not require it). Node.js is extremely fast and lightweight and has tremendous mainstream momentum behind it. However, it is single-threaded and AFAIK does not parallelize out of the box.

 

Scala is being used for big data, compute intensive applications. Although there is a lot of interest in it, I'm not sure how suited it is for general purpose, mainstream applications.


Ruby on Rails is not a programming language but rather a web development framework - think of it as a lightweight version of the old IBM WebSphere product. Ruby is the underlying programming language. RoR is already being upended by Node.js, which is based on JavaScript and uses a functional programming style (although the JavaScript language does not require it). Node.js is extremely fast and lightweight and has tremendous mainstream momentum behind it. However, it is single-threaded and AFAIK does not parallelize out of the box.

 

Scala is being used for big data, compute intensive applications. Although there is a lot of interest in it, I'm not sure how suited it is for general purpose, mainstream applications.

 

valueInv, you seem to have messed up the quoting. It kind of fogs up who said what; not a big deal, but it kind of looks like I said Rails was a hot new language, which I did not. Yes, Rails is a framework, which I said in the post you are responding to. Yes, Node.js is gaining in popularity. My point in bringing up Scala (and why I did not bring up Node.js) is that ItsAValueTrap had commented on how dealing with things like parallelism is hard, and IMO Scala is an example of trying to make that easier for developers to deal with, hence increasing the likelihood that things can be parallelized without the developers having to deal with things like locking and threads.

 

On a slightly different note...

 

Gents, I am going to bail on this thread now. While the software/cloud discussion has been fun, I feel like we have gone off the rails (no pun intended!) a bit, and I would like to keep my time on this board more focused on investing. Not saying there is no relationship between the two, but I talk quite a bit about software at work (and write software too) and would like to try and focus my forum time on things I don't get to do other places. It has been a great conversation though. See you on the Dell thread in the Investment Ideas section, and good luck to all. ValueInv, sorry I never got a chance to comment on Dell and how they will handle competition from Oracle and IBM; maybe later.

 

 

 


  • 3 weeks later...

Looks like a good buying opportunity will present itself on Monday:

 

http://online.barrons.com/article/SB50001424053111904034104578058751530288678.html?mod=BOL_hpp_cover#articleTabs_article%3D0

 

When Microsoft introduces its long-awaited Windows 8 operating system Friday, it will be the first Windows rollout to face real competition since, well, forever. Today, smartphones and tablets do almost all of the day-to-day tasks a PC does
