1934 vision of a global network of computers

In 1934, Otlet sketched out plans for a global network of computers (or “electric telescopes,” as he called them) that would allow people to search and browse through millions of interlinked documents, images, audio and video files. He described how people would use the devices to send messages to one another, share files and even congregate in online social networks. He called the whole thing a “réseau,” which might be translated as “network” — or arguably, “web.”

Historians typically trace the origins of the World Wide Web through a lineage of Anglo-American inventors like Vannevar Bush, Doug Engelbart and Ted Nelson. But more than half a century before Tim Berners-Lee released the first Web browser in 1991, Otlet (pronounced ot-LAY) described a networked world where “anyone in his armchair would be able to contemplate the whole of creation.”

Although Otlet’s proto-Web relied on a patchwork of analogue technologies like index cards and telegraph machines, it nonetheless anticipated the hyperlinked structure of today’s Web. “This was a Steampunk version of hypertext,” said Kevin Kelly, former editor of Wired, who is writing a book about the future of technology.

Otlet’s vision hinged on the idea of a networked machine that joined documents using symbolic links. While that notion may seem obvious today, in 1934 it marked a conceptual breakthrough. “The hyperlink is one of the most underappreciated inventions of the last century,” Mr. Kelly said. “It will go down with radio in the pantheon of great inventions.”

full article

Experimental phone network uses virtual sticky notes

The rapid convergence of social networks, mobile phones and global positioning technology has given Duke University engineers the ability to create something they call “virtual sticky notes,” site-specific messages that people can leave for others to pick up on their mobile phones.

A team led by Roy Choudhury has developed a new software system that enables users to obtain location-specific, real-time information – either passively or directly – from other mobile phone users across the world. It will be as if all participating mobile phones work together, allowing each individual to access information throughout that virtual network.

Interested in trying that new Mexican restaurant? Tap into the virtual sticky notes floating in the ether within the restaurant and find what other network users thought of it.

Heading to the airport and need to know where the traffic jams are? Sensors in the phones detect movement and can relay back to the network where traffic is the heaviest.

The potential of this new application, which has been dubbed micro-blog, is practically limitless, Roy Choudhury said.

“We can now think of mobile phones as a ‘virtual lens’ capable of focusing on the context surrounding it,” Roy Choudhury said. “By combining the lenses from all the active phones in the world today, it may be feasible to build an internet-based ‘virtual information telescope’ that enables a high-resolution view of the world in real time.”

The application combines distributed networks (like Wikipedia), social networks (Facebook), mobile phones, computer networks and geographic positioning technologies such as GPS or WiFi.

“Micro-blogs will provide unprecedented levels and amounts of information literally at your fingertips no matter where you are, through your mobile phone,” Roy Choudhury said. “We have already deployed a prototype, and while some challenges remain to be addressed, the feedback we’ve received so far indicates that micro-blog represents a promising new model for mobile social communication.”

Mobile phones are already more than just communications devices, Roy Choudhury points out. Increasingly they are coming equipped with cameras, GPS service, health monitors, and even accelerometers, devices which measure acceleration.

In simple terms, people who use the micro-blog application will enter information – photos, comments, videos – into their mobile phone, where it will be “tagged” by the user’s location. Passive information, such as location or speed, can also be recorded. All this information is then sent to a central server, where it is available to all participants.
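To make that flow concrete, here is a minimal Python sketch of what such a location-tagged entry might look like; the `MicroBlogPost` structure and its field names are illustrative assumptions, not the actual Duke implementation.

```python
from dataclasses import dataclass, field
from typing import Optional
import json
import time

@dataclass
class MicroBlogPost:
    """One user entry, tagged with the phone's location at capture time."""
    user_id: str
    content: str                       # a comment, or a reference to a photo/video
    latitude: float
    longitude: float
    timestamp: float = field(default_factory=time.time)
    speed_mps: Optional[float] = None  # passive data, e.g. from an accelerometer

    def to_json(self) -> str:
        """Serialise the post for upload to the central server."""
        return json.dumps(self.__dict__)

# The phone tags the entry with its position and pushes it to the server,
# where it becomes queryable by every other participant.
post = MicroBlogPost(user_id="alice", content="Great tacos here!",
                     latitude=35.994, longitude=-78.899)
print(post.to_json())
```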

“So if you’re planning a trip to the beach or a restaurant, you can query the micro-blog and get information or see images from people who have been or are currently there,” Roy Choudhury said. Another application consists of individual, localized pockets of information.

“Say you are in a museum,” Roy Choudhury said. “As you pass a particular painting, your phone could download comments from art experts providing relevant information about that painting.”

The current prototype works with the Nokia N95 mobile phone, but Roy Choudhury said the application will eventually be written for any kind of programmable mobile phone. He also believes that these, and other as-yet-to-be imagined applications, will be commonplace within five years.

However, Roy Choudhury said there are three solvable issues that still need some work. The first is the trade-off between precise geographic location and battery power.

“GPS, while it can be accurate down to several meters, can also drain a mobile phone battery in seven hours,” Roy Choudhury said. “On the other hand, WiFi and GSM technologies are widely available and don’t use as much energy, but they aren’t nearly as accurate. We believe that an approach blending these technologies will probably solve that problem.” (GSM is an international cell phone standard that enables global roaming.)
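As a rough illustration of the blending idea, the sketch below picks the lowest-power positioning source that satisfies the accuracy a query needs; the accuracy and power figures are illustrative guesses, not measurements from the Duke prototype.

```python
# Illustrative figures only: GPS is precise but power-hungry, while
# WiFi and GSM localisation are cheaper but coarser.
SOURCES = {
    "gps":  {"accuracy_m": 5,   "relative_power": 1.0},
    "wifi": {"accuracy_m": 40,  "relative_power": 0.2},
    "gsm":  {"accuracy_m": 400, "relative_power": 0.1},
}

def pick_source(needed_accuracy_m: float, battery_fraction: float) -> str:
    """Choose the lowest-power source that meets the accuracy requirement;
    fall back to WiFi when the battery is too low to afford GPS."""
    for name in ("gsm", "wifi"):          # try the cheap sources first
        if SOURCES[name]["accuracy_m"] <= needed_accuracy_m:
            return name
    return "gps" if battery_fraction > 0.2 else "wifi"

print(pick_source(needed_accuracy_m=50, battery_fraction=0.9))  # -> "wifi"
print(pick_source(needed_accuracy_m=10, battery_fraction=0.9))  # -> "gps"
```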

Another issue is more societal than technological: what kinds of incentives would inspire users to enter information when queried by strangers, since responding costs them time and battery power?

Finally, location privacy needs to be addressed. Since mobile phones are transmitting data – including location – back to a central server, users must trust the administrator to keep this information private. Roy Choudhury believes that these issues can be addressed by assigning different modes – private, social or public – much like social networks already do.
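A minimal sketch of how such sharing modes could gate what leaves the phone follows; the mode names come from the article, but the friend-list check is an assumption about how the “social” mode might work.

```python
from enum import Enum
from typing import Set

class Mode(Enum):
    PRIVATE = "private"  # data never leaves the phone
    SOCIAL = "social"    # shared with confirmed friends only
    PUBLIC = "public"    # visible to any participant

def visible_to(mode: Mode, requester: str, friends: Set[str]) -> bool:
    """Decide whether a requester may see this user's tagged data."""
    if mode is Mode.PUBLIC:
        return True
    if mode is Mode.SOCIAL:
        return requester in friends
    return False         # Mode.PRIVATE

friends = {"bob", "carol"}
print(visible_to(Mode.SOCIAL, "bob", friends))      # True
print(visible_to(Mode.SOCIAL, "mallory", friends))  # False
```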

read full article

Chill out, your computer knows what’s best for you

Computers are starting to become more human-centric, anticipating your needs and smoothly acting to meet them.

Much of the progress can be attributed to work done by European researchers.

The technologies the researchers developed leave humans free to concentrate on their real work instead of having to think about the computer and how to operate it.

The CHIL (Computers in the Human Interaction Loop) project set out to put human needs first in the development of new communications technologies. In other words, the researchers examined ways in which computers can serve humans better, rather than humans being subservient to the way computers are currently designed.

Focusing on humans

Rather than focusing on human-and-machine interaction, they set out to create a new paradigm of machine-supported human-to-human interaction. The researchers’ goal was to improve user satisfaction and worker productivity.

The researchers looked at communications and the use of technology in meeting rooms, lecture halls and classrooms and set out to make the technology responsive to human needs in such settings.

The CHIL team developed systems that could understand the context of meetings and proactively help the participants by controlling the meeting environment. For example, during a meeting such a system can filter and respond to incoming mobile phone calls to each participant, secretly remind participants of facts such as other participants’ names, and provide a virtual shared workspace for all.
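As a toy illustration of that kind of proactive behaviour, the sketch below routes an incoming call according to meeting context; the rules and the urgency flag are invented for the example, not taken from the CHIL systems.

```python
from dataclasses import dataclass

@dataclass
class Call:
    caller: str
    urgent: bool

def handle_incoming_call(call: Call, in_meeting: bool) -> str:
    """Route an incoming call based on inferred meeting context:
    urgent calls interrupt discreetly, the rest are deferred."""
    if not in_meeting:
        return "ring"
    if call.urgent:
        return "vibrate"     # interrupt discreetly
    return "voicemail"       # defer until the meeting ends

print(handle_incoming_call(Call("boss", urgent=True), in_meeting=True))          # vibrate
print(handle_incoming_call(Call("salesperson", urgent=False), in_meeting=True))  # voicemail
```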

Many of the spin-offs from the project involve CHIL participants and the technologies the researchers developed.

Project scientific coordinator Dr Rainer Stiefelhagen says the project made some “remarkable” achievements. He points to the project’s advances in building a new system of audio-visual components to monitor and analyse what people do and how they behave in different circumstances.

“We also organised a series of evaluation workshops which attracted people from all over the world, and we were invited to make a lot of presentations about project outcomes,” Stiefelhagen says. “This gave us a high profile in academic and research circles and resulted in lots of citations.”

Achievements fill a book

In fact, interest in the achievements of project CHIL is so high that Springer intends to publish a book later this year titled ‘Computers in the Human Interaction Loop’.

A comprehensive catalogue of the project’s technological achievements has also been published to attract interest from companies and research houses in pursuing and commercialising the different technologies.

All of these efforts have resulted in a surge of activity following on from the project with some serious money being invested.

The Fraunhofer Institute, one of Europe’s most prestigious research organisations, has selected smart-room technology developed by CHIL for further development. This technology is able to determine what people in a room are doing and saying and react accordingly. A new five-year research project, ‘Visual Perception for the Human-Machine Interaction’, aims to bring an end product to market.

The project, which started two months after CHIL ended, aims to develop a user-friendly system in a crisis management scenario.

Researchers will build on the project’s work of observing what people in a room are doing and saying in different situations, by studying how they act in a crisis-management situation.

Smart crisis management

Specifically they will observe how police or fire officials in a crisis management room handle incoming data in an emergency situation, and work out ways in which they can be helped to communicate both amongst themselves and with the outside world.

Stiefelhagen, who is also involved with this project, explains: “We want to build a smart environment supporting this kind of work, with big interactive displays and relevant services and a lot of perception components. Then we will take the best of them and develop them even further for practical applications, after working with the emergency services to see how this can best be done to meet their requirements.”

Another CHIL application is being developed by a new startup company, Videmo Intelligent Video Analysis. Videmo plans to provide the service to retailers and surveillance firms.

The company is developing software for intelligent video analysis with a focus on customer monitoring in retail situations. The system can automatically analyse customer behaviour to, for instance, show which displays are attracting attention and which are not.

On the security side the system can provide video-based monitoring and analysis of point-of-sale systems, including automatic fraud detection.

“There are a lot of CCTVs continually monitoring, but 95% of the time the data is simply recorded and stored away,” says Stiefelhagen. “What this company aims to do is build more intelligent video analysis programs which can perform tasks such as automatically identifying wanted people or recognising suspicious walking and movement patterns.”

House anticipates occupants’ needs

At an earlier stage of development is a proposed link-up between one of the CHIL project partners and an Italian building company to develop a smart house, moving from the single-room environment which served as testbed for the project’s researchers.

Here the analogy between computer and butler is much closer, with the house computer anticipating the inhabitant’s needs.

Many potential applications for CHIL technologies have also been identified and suggested in the catalogue. One of these is SmarTrack, a real-time system that provides accurate information about people’s spatial locations, applied here to sports analysis.

During TV coverage of a sporting event, real-time tracking of each player would allow for online analysis of the important phases of the game to support the commentators.

Face recognition is another area that may be ripe for development, replacing or augmenting PIN codes and other forms of security access such as fingerprint biometrics.

The potential applications include access control to buildings, offices, computers and the Internet, as well as for the authentication for electronic transactions.
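A common way such systems decide access, sketched below, is to compare a freshly captured face embedding against an enrolled template; the 128-dimensional vectors and the 0.6 threshold are stand-ins for whatever a real recognition model would produce.

```python
import numpy as np

def grant_access(probe: np.ndarray, enrolled: np.ndarray,
                 threshold: float = 0.6) -> bool:
    """Compare a face embedding captured at the door against the enrolled
    template; grant access when cosine similarity exceeds the threshold."""
    sim = float(np.dot(probe, enrolled) /
                (np.linalg.norm(probe) * np.linalg.norm(enrolled)))
    return sim >= threshold

# Dummy 128-dimensional embeddings standing in for a real model's output.
rng = np.random.default_rng(0)
template = rng.normal(size=128)
same_person = template + rng.normal(scale=0.1, size=128)  # slight capture noise
print(grant_access(same_person, template))  # True: embeddings nearly parallel
```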

Source: ICT Results

Morris Miselowski’s thoughts:

Only yesterday I wrote about 2019 as the year I believe computer processing power and the brain will meet for the first time.

By 2019 we will have finished mapping the brain (just as the Genome project mapped our DNA), and computer processing power will finally be fast enough to mimic the brain.

If we add to the mix the kind of technology this article speaks about, we will see the 2020s as the decade when computers begin to take over our mundane, routinised thinking and allow us to move on to higher-order issues and activities.

If apes can plan for the future, why can’t we?

Apes can plan for their future needs just as we humans can – by using self-control and imagining future events. Mathias and Helena Osvath’s research, from Lund University Cognitive Science in Sweden, is the first to provide conclusive evidence of advanced planning capacities in non-human species. Their findings are published online this week in Springer’s journal, Animal Cognition.

The complex skill of future planning is commonly believed to be exclusive to humans, and has not yet been convincingly established in any living primate species other than our own. In humans, planning for future needs relies heavily on two mental capacities: self-control or the suppression of immediate drives in favour of delayed rewards; and mental time travel or the detached mental experience of a past or future event.

In a series of four experiments, Mathias and Helena Osvath investigated whether chimpanzees and orangutans could override immediate drives in favour of future needs, and therefore demonstrate both self-control and the ability to plan ahead, rather than simply fulfil immediate needs through impulsive behaviour.

Two female chimpanzees and one male orangutan, from Lund University Primate Research Station at Furuvik Zoo, were shown a hose and how to use it to extract fruit soup.

They were then tempted with their favourite fruit alongside the hose to test their ability to suppress the choice of the immediate reward (the favourite fruit) in favour of a tool (the hose) that would lead to a larger reward 70 minutes later (the fruit soup). The apes chose the hose more frequently than their favourite fruit, suggesting that they are able to make choices in favour of future needs, even when those needs directly compete with an immediate reward.

New tools the apes had not encountered before were then introduced: one new functional tool which would work in a similar way to the hose, and two distractor objects.

The apes consciously chose the new functional tool more often and took it to the reward room later on, where they used it appropriately, demonstrating that they selected the tool based on its functional properties. According to the authors, this indicates that the apes were pre-experiencing a future event i.e. visualizing the use of the new tool to extract the fruit soup.

One of the decisive experiments excluded associative learning as an explanation of the results. Associative learning has been suggested to account for the findings in previous planning studies on animals (corvids and great apes), and therefore the previous studies have not been generally accepted as evidence for non-human planning.

article source

Morris Miselowski’s comments:

Wow, is this where futurists evolved from?

Photo of the great grandfather of computers

Here is the first known photograph of the great grandfather of modern digital computers – but you couldn’t use it on the train or take it jogging with you.

The panoramic black and white image, which has been unearthed in the archives at The University of Manchester, shows a development version of ‘The Baby’ taking up a whole room with its towering Post Office racks and jumble of wiring.

The Small Scale Experimental Machine – to give the Manchester invention its full title – successfully executed its first program on 21 June 1948 – and paved the way for the computers, iPods and mobile phones we all take for granted today.

Built and designed by Tom Kilburn and Freddie Williams at The University of Manchester, it was the first electronic digital computer capable of storing a program.

There are no photographs of the original Baby from June 1948. The panoramic image is often described as a photograph of the Baby machine, but it actually shows an intermediate stage, beginning to resemble the later University Mark 1.

for the full story click here

Love + sex with robots

Romantic human-robot relationships are no longer the stuff of science fiction — researchers expect them to become reality within four decades. And they do not mean simply mechanical sex.

“I am talking about loving relationships about 40 years from now,” David Levy, author of the book “Love + sex with robots”, told AFP at an international conference held last week at the University of Maastricht in the south-east of the Netherlands.

“… when there are robots that have also emotions, personality, consciousness. They can talk to you, they can make you laugh. They can … say they love you just like a human would say ‘I love you’, and say it as though they mean it …”

Robots as sex toys should be on the market within five years, predicted Levy, “a sort of an upgrade of the sex dolls on sale now”.

These would have electronic speech and sensors that make them utter “nice sounds” when a human caresses their “erogenous zones”.

But to build robots as real partners would take a bit longer, with conversation skills being the main obstacle for developers.

Scientists were working on artificial personality, emotion and consciousness, said Levy, and some robots already appear lifelike.

“But for loving relationships — that is something completely different. In loving relationships there are many more things that are important. And the most difficult of all is conversation.

“You want your robot to be able to talk to you about what is interesting to you. You want a partner who has some similar interest to you, who talks to you in a manner that pleases you, who has a similar sense of humour to you.”

The field of human-computer conversation is crucial to building robots with whom humans could fall in love, but is lagging behind other areas of development, said the author.

“I am sure it will (happen). In 40 years … perhaps sooner. You will find robots, conversation partners, that will talk to you and you will get as much pleasure from it as talking to another human. I am sure of it.”

Levy’s bombshell thesis, whose publication has had a ripple-effect way beyond the scientific community, gives rise to a number of complicated ethical and relationship questions.

British scholar Dylan Evans pointed out the paradox inherent to any relationship with a robot.

“What is absolutely crucial to the sentiment of love is the belief that the love is neither unconditional nor eternal.

“Robots cannot choose you, they cannot reject you. That could become very boring, and one can imagine the human becoming cruel towards his defenceless partner”, said Evans.

A robot could conceivably be programmed with a will of its own and the ability to reject his human partner, he said, “but that would be a very difficult robot to sell”.

Some warn against being overhasty.

“Let us not exaggerate the possibilities!” said Dutch researcher Vincent Wiegel of Delft University of Technology.

“Today, the artificial intelligence we are able to create is that of a child of one year of age.”

But Levy is unyielding. He is convinced it will happen, and predicts many societal benefits.

“There are many millions of people in the world who have nobody. They might be shy or they might have some psychological hang-ups or psycho-sexual hang-ups, they might have personality problems, they might be ugly …

“There will always be many millions of people who cannot make normal satisfactory relationships with humans, and for them the choice is not: ‘would I prefer a relationship with a human or would I prefer a relationship with a robot?’ — the choice is no relationship at all or a relationship with a robot.”

They might even become human-to-human relationship savers, he predicted.

“Certainly there will be some existing human-human relationships where one partner might say to the other partner: ‘if you have sex with a robot I’m leaving you’.

“There will be others who say: ‘when you go on your business trip please take your robot because I happen to worry about the red light district’.”

3-D Viewing without Goofy Glasses

With the release of a new set of 3-D video screens next week, Philips Electronics is bringing a sci-fi cinema standby a little closer to everyday use.

Philips’ WOWvx displays, which allow viewers to perceive high-quality 3-D images without the need for special glasses, are now beginning to appear in shopping malls, movie-theatre lobbies, and theme parks worldwide.

The technology uses image-processing software, plus display hardware that includes sheets of tiny lenses atop LCD screens. The lenses project slightly different images to viewers’ left and right eyes, which the brain translates into a perception of depth.
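The column-interleaving idea behind such lenticular displays can be sketched in a few lines. This shows only the simplest two-view case; screens like WOWvx interleave many views at sub-pixel granularity, so the details here are purely illustrative.

```python
import numpy as np

def interleave_stereo(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave a stereo pair column by column. Behind a lenticular
    sheet, even columns are refracted toward one eye and odd columns
    toward the other, so each eye sees a slightly different image."""
    assert left.shape == right.shape
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]  # odd columns carry the right-eye view
    return out

# Two dummy 4x6 grayscale "images": one all zeros, one all ones.
left = np.zeros((4, 6))
right = np.ones((4, 6))
print(interleave_stereo(left, right))  # columns alternate 0, 1, 0, 1, ...
```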

For now, the screens are expensive and not yet marketed for home use. But Philips, which first released the technology in 2006, is working on technical improvements that will make the screens better suited for the home.

Full article