The Quantum Graph

Month: May, 2012

What is QC?

Non-physicists often have the mistaken idea that quantum mechanics is hard. Unfortunately, many physicists have done nothing to correct that idea. But in newer textbooks, courses, and survey articles, the truth is starting to come out: if you wish to understand the central ‘paradoxes’ of quantum mechanics, together with almost the entire body of research on quantum information and computing, then you do not need to know anything about wave-particle duality, ultraviolet catastrophes, Planck’s constant, atomic spectra, boson-fermion statistics, or even Schrödinger’s equation.

Aaronson 2004, p. 23

‘Quantum computing’ sounds as exotic to physicists and computer scientists as it does to laypeople. The standard definition is that it is the field devoted to applying the findings of quantum mechanics to computation in general. And what is quantum mechanics? A mystical yet apparently accurate description of the Cosmos, in which everything is ultimately everywhere at once. A lapis philosophorum.

Being fringey, quantum is mysterious. Yet the mostly established assumption is that QC is an impossible task for a ‘classical’ computer/Turing machine, such as the device you’re reading this with: faithfully simulating a quantum system demands the management of exponentially growing resources. In other words, good old-fashioned software cannot instruct a classical computer to carry out QC efficiently.
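The exponential cost mentioned above can be made concrete with a short sketch (ours, not from the original post), assuming the common dense state-vector representation: the full state of n qubits requires 2^n complex amplitudes, so memory alone quickly outgrows any classical machine.

```python
# Sketch (illustrative assumption): a dense state-vector simulation stores
# one complex amplitude per basis state, i.e. 2**n amplitudes for n qubits.

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to hold the full state of n qubits, assuming one
    128-bit (16-byte) complex number per amplitude."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

On these assumptions, 10 qubits fit in a few kilobytes, 30 qubits already need about 16 GiB, and 50 qubits would need around 16 million GiB, which is why naive classical simulation stalls at a few dozen qubits.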

Since a classical implementation of QC is thought impractical, research facilities have started building exotic processors to overcome the problem. This new class of hardware represents the beginnings of a fresh generation of technology. As it becomes scalable, we are left with a question: what will we do with all the obsolete classical hardware (laptops, desktops, servers, phones, satellites…) we’ll have lying around?

Noospheer is engaged in the research and application of QC to permissive-license software which can run on any computer and faithfully reproduce live quantum systems. In other words, we’re attempting to defy the assumption, with the goals of improving database search speeds, privacy and energy efficiency.

We’ll publish when ready for all the hackers.


The Raw Data Market

One of the key dilemmas noospheer has faced as a startup is coming up with a fair, sustainable and scalable revenue model. Licensing fees don’t work because we’re open source + we simply don’t want to charge. Advertising doesn’t work because ads take up valuable pixels. Our model should be uniquely enabled by open technology.

Recently, when we described the project to a friend who works in private equity, she nonchalantly suggested the basis of a new model. Until now noospheer has focused on the black and white: private data and public data. Users (individuals and organizations) can either keep data closed or open access. But what about users such as corporations with high-value proprietary information? Realism says company X with data set Y will simply keep Y under lock and key. But since corporations are organisms of profit, perhaps if X could sell Y for price Z, it would be willing to do so.

Gartner says the business intelligence market is huge and growing. The average 20-page market research package sells for around $5,000, and that comes with a few tables in a PDF, not reams of structured, visualized data. Simply put, noospheer is free to access, download + host, share, extend and modify. But if one decides to charge for access to information through the network, we’ll take a cut: 3%.

As highlighted in the previous post, noospheer is an open data platform. Once the system is online, users are encouraged to spread their data – especially scholarly data – free of charge. Yet since the world is primarily dictated by large, proprietary-paradigm institutions, we play the corporate game and provide an option to charge for data set access. Should the market incorporate the noospheer model, it can fiscally enable our company (Noo Corp) as an open force for change… Stay tuned.