The networked grid: charting the course of the grid’s future


This post is by Sponsored Post from VentureBeat






Editor’s note: This post is sponsored by Greentech Media.

On November 4, 2009, visionaries from the power, communications and IT industries will converge at the PG&E Auditorium in San Francisco to discuss the future of the smart grid at Greentech Media’s The Networked Grid conference.

The smart grid is rapidly moving beyond the household energy meter to an intelligent, information-rich networked grid. Meanwhile, the electric power industry is in the midst of a transformation to better suit the new applications that are helping to engineer its future. A smart grid transforms the way power is delivered, consumed and accounted for. The existing electric power grid was built to deliver power for mid-20th-century requirements, with no concern for greenhouse-gas emissions or the emergence of intermittent, often distributed, renewable energy generation; the smart grid, by contrast, powers and anticipates the needs of the modern, internet-enabled home and business for the 21st century and beyond.

Greentech Media has assembled the most relevant, influential and diverse set of speakers to discuss the topic of intelligently networking the electric grid. PG&E’s Andrew Tang, senior director, smart energy web, and Oracle’s Linda Jackman, vice president of product strategy and management, utilities business unit, will deliver the morning and afternoon keynote speeches. Speakers from California’s big three utilities and its public utilities commission – PG&E, SDG&E, SCE and the CPUC – will be joined by influential smart grid executives from companies such as ABB, Cisco, Control4, Coulomb Technologies, Enernex, Google, GridPoint, GTM Research, Intel, Oracle, Siemens, Silver Spring Networks, Stanford Research Institute, Tendril and Verizon Wireless.

The course these industry leaders are charting will be directly relevant to the electric grid for decades to come. Panel discussions will examine the next six years of the smart grid with insight into the infrastructure and service deployment plans utilities are beginning to roll out. Other discussions will involve the trends surrounding infrastructure technology, applications, policy, deployments and consumer adoption.

The smart grid of the future is being built and tested today. Discussions at The Networked Grid will have a direct impact on that future. From the panel sessions on networking renewable energy to the cocktail reception at the day’s end, there will be ample opportunities to network amongst the innovative technology solution providers, thought-leading utilities, influential policy makers and leading industry analysts.

The conference promises to provide an informative, balanced and forward-looking day of information on trends surrounding infrastructure technology, applications, policy, deployments and consumer adoption. In addition, GTM Research will unveil the key findings of a new report focused on the North American Utility Smart Grid market outlook through 2015.

The next phase of the smart grid is just beginning, and it starts at The Networked Grid.


Want to Boost LTE Signals? AlcaLu Says More Power to You [GigaOM]


This post is by Stacey Higginbotham from GigaOM Network






Alcatel-Lucent today said it is testing a new technology that will be able to deliver faster and more consistent mobile broadband speeds on next-generation Long Term Evolution networks. It’s just one of a few technologies that will help squeeze all the speed and performance out of mobile networks, so your smartphones will get faster data in more places and have better battery life. This way, even as mobile devices consume more data, eating up bandwidth, people will be less likely to experience the spotty service affecting iPhone users in the U.S.

Alcatel-Lucent researchers at Bell Labs have come up with a way to take MIMO (multiple input, multiple output) to the next level. Just like adjusting the rabbit ears on a TV, changing the antenna configuration can help improve the signal strength on cell networks. MIMO is like boosting the number of those antennas and placing them so they expand the coverage. The researchers have taken that up a notch with a series of technologies called CoMP, short for Coordinated Multipoint Transmission. The telecommunications world loves acronyms, but it basically means they’re going to link the MIMO-enabled base stations together using a low-latency backhaul such as fiber, or less preferably, microwave.

The benefit is a more consistent experience for consumers, because data speeds won’t drop as precipitously when users reach the edge of a cell tower’s range. Think of it as more bars in more places away from the tower, or fewer dropped calls as you drive away from a cell site. The technology is still at the research stage, but within the next few years or so, standards will come out dictating how CoMP might be used to boost network coverage. Operators will likely have to update the software on their gear to make this work, but it won’t require a wholesale upgrade.
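
Loosely speaking, the cell-edge gain can be sketched with a toy signal-to-interference-plus-noise (SINR) calculation: once a neighboring base station is coordinated, its transmission stops counting as interference and, in joint-transmission CoMP, even adds to the useful signal. The numbers below are invented for illustration and are not Alcatel-Lucent's figures:

```python
from math import log10

def db(x):
    """Convert a linear power ratio to decibels."""
    return 10 * log10(x)

# Toy received powers (milliwatts) at a cell-edge user.
serving_cell = 1.0e-9      # signal from the serving base station
neighbor_cell = 0.8e-9     # nearly as strong: the user is at the cell edge
noise = 1.0e-10            # thermal noise floor

# Conventional network: the neighbor is pure interference.
sinr_uncoordinated = serving_cell / (neighbor_cell + noise)

# CoMP joint transmission: the coordinated neighbor carries the same
# data, so its power adds to the signal instead of the interference.
sinr_comp = (serving_cell + neighbor_cell) / noise

print(f"Cell-edge SINR without CoMP: {db(sinr_uncoordinated):5.1f} dB")
print(f"Cell-edge SINR with CoMP:    {db(sinr_comp):5.1f} dB")
```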

Meanwhile, Quantance, a provider of a component that resides in the cell phone, has a technology that smooths out the radio signals to help phones be heard by the tower and boost their ability to hear the signals coming from it. That’s good for your handset’s battery life because the easier it is to hear a signal, the less power a battery has to expend on boosting a signal. And who doesn’t like more talk or web surfing time?

As data usage continues to grow, carriers are searching for ways to deliver dependable service to consumers with limited spectrum resources and without blanketing the nation with towers. Research into more efficient use of existing spectrum, such as what Alcatel-Lucent is doing, as well as improvements on the handset side will help, but figuring out how to optimize the radio networks will be a problem everyone will eventually need to think about.

Real-Time Web Summit Keynote


This post is by Frederic Lardinois from ReadWriteWeb






Our own Marshall Kirkpatrick kicked off our Real-Time Web Summit at the Computer History Museum in Mountain View today. Marshall, who spoke with over 40 different vendors over the last few months in preparation for this event, presented a high-level overview of what he thinks the recent developments around the real-time web will mean for companies and users. Specifically, Marshall stressed the fact that real time doesn’t just mean speed but also creates value by including presence data, flow and data syncing. All of this, according to Marshall, will lead to radical changes in how users will experience the Web in the near future.


Creating Value on the Real-Time Web

Starting out, Marshall discussed some of the use cases of the real-time Web, ranging from people-to-people services like Twitter and Olark to services that focus on machine-to-machine communication and enable services like Friendfeed and Google Reader. Services like Aardvark, which provide links between people and machines, and machine-to-people services like NotifyMe and PostRank fall in between.


Information Overload

This new river of data, of course, could easily lead to total information overload. In the best case, the tools will get so good that we won’t be overwhelmed by all of the data coming at us. In the worst case, the real-time Web loses its usefulness because the flow of data simply overwhelms users, or because that usefulness has to be sacrificed to keep the overload in check.


Standards

As Marshall pointed out, though, we are only laying the railroad tracks for this future of the real-time Web right now. Services like PubSubHubbub and RSSCloud are currently building the infrastructure that will make these major changes on the Internet happen, though the standards that will make the real-time Web possible are still evolving.

The question, of course, is how these standards will evolve. While some standards bodies are currently trying to create them, chances are that some standards will evolve naturally as certain vendors become dominant.
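
To make those "railroad tracks" a little more concrete: in the PubSubHubbub protocol, a subscriber registers a callback URL with a hub, and the hub then pushes new feed entries to that callback instead of the subscriber repeatedly polling the feed. A minimal sketch of the subscription request, with placeholder hub, feed and callback URLs:

```python
import urllib.parse
import urllib.request

# Hypothetical endpoints; substitute a real hub and a callback you control.
HUB_URL = "https://pubsubhubbub.example.com/"
TOPIC_URL = "https://blog.example.com/feed.xml"      # the feed to follow
CALLBACK_URL = "https://myapp.example.com/push"      # where the hub POSTs updates

params = urllib.parse.urlencode({
    "hub.mode": "subscribe",
    "hub.topic": TOPIC_URL,
    "hub.callback": CALLBACK_URL,
    "hub.verify": "async",   # the hub confirms the subscription by calling back
}).encode()

# One POST to the hub; from then on, new entries arrive as pushes
# to the callback rather than as repeated polls of the feed.
with urllib.request.urlopen(HUB_URL, data=params) as resp:
    print(resp.status)  # 202 Accepted on a typical async-verified request
```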

Bringing the Real-Time Web to the ‘Slow Web’

Marshall pointed to Facebook’s Global Happiness Index as an example of the kind of product companies can develop based on data created on the real-time web. He also looked at a number of companies like Evri, FirstRain and JS-Kit’s Echo that are bridging the gaps between relatively static pages like blogs and the real-time web.



Integrated Gmail Updates with Improved Looks and Handy Features [Downloads]


This post is by Kevin Purdy from Lifehacker






Firefox: Integrated Gmail is a clever way to load any Google app on one landing page. With version 2.0, it also adds a lot of interface fixes and helpful features, in the style of a certain well-known Gmail extension.

The basic functionality of Integrated Gmail remains the same—load in other Google Apps, like Reader, Calendar, Chat, Tasks, or whatever you’d like, and set how big they are when you click to expand them. New to 2.0 are several features included in Better Gmail 2, like message counts in your web favicon, and a few that are just all-around neat: universal drag-and-drop between left and right sidebars, sidebar and title bar hiding, multiple inbox support, theme detection and compatibility, and much more, detailed at the developer’s Mozilla page.

Of course, loading multiple Google apps in a single page can introduce a good bit of lag, and we saw some incessant Gmail load flickering in our installation. With some adjustments, though, you can probably create a single-page inbox for all the stuff Google wants to serve you.

Integrated Gmail is a free download and works wherever Firefox does.






Apple’s iPhoto Makes It Way Too Easy To Delete Your Entire Flickr Library


This post is by Jason Kincaid from TechCrunch






Apple has long been associated with the saying “it just works”. Well, sometimes it apparently works a little too well, to the point of allowing users to delete their entire Flickr libraries in one fell swoop without really meaning to. Oops.

The problem stems from the way Apple’s popular iPhoto software is integrated with Flickr. Recent versions of iPhoto allow users to sync specified albums with Flickr, which means they can automatically upload new photos as soon as they import them into iPhoto from their cameras, and change their captions for both at once. The problem is that iPhoto treats this syncing very literally: if you delete a photo from one of these albums on iPhoto, it doesn’t just remove it from the Set on Flickr — it actually deletes the photo from your Flickr account entirely.
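
Conceptually, the complaint comes down to which of two policies a "sync" applies when a photo disappears from the local album: remove it from the remote set, or mirror the delete all the way through to the account. A toy sketch of the two policies (purely illustrative; this is not Apple's or Flickr's code):

```python
class FlickrAccount:
    """Toy stand-in for a remote photo account: a library plus named sets."""
    def __init__(self):
        self.library = set()   # every photo the account owns
        self.sets = {}         # set name -> photo ids

    def remove_from_set(self, name, photo):
        # Conservative policy: the photo leaves the set but stays in the library.
        self.sets[name].discard(photo)

    def delete_photo(self, photo):
        # Destructive policy: the photo disappears from the account entirely.
        self.library.discard(photo)
        for photos in self.sets.values():
            photos.discard(photo)

def sync_album(local_album, account, set_name, mirror_deletes):
    """Push the local album state to the remote set."""
    remote = account.sets.setdefault(set_name, set())
    for photo in set(remote) - set(local_album):
        if mirror_deletes:
            account.delete_photo(photo)               # what surprised users
        else:
            account.remove_from_set(set_name, photo)  # what most expected

acct = FlickrAccount()
acct.library = {"p1", "p2"}
acct.sets["Vacation"] = {"p1", "p2"}
sync_album(["p1"], acct, "Vacation", mirror_deletes=True)
print(acct.library)   # {'p1'}: p2 is gone from the whole account
```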

iPhoto apparently informs users that when they stop sharing a photo album between iPhoto and Flickr, “The album no longer appears on Flickr, but the photos remain in your [iPhoto] library.” The wording is both ambiguous (Apple could just mean it’s deleting the photos from the Flickr set) and not nearly strong enough to suggest that it’s actually deleting data. And plenty of people have made that mistake.

Over the last several weeks this has led to a number of threads in Flickr’s help forum where some users are up in arms after accidentally deleting hundreds of photos at once.



Fortunately, Flickr is taking notice. A Flickr engineer has tweeted about how bad the design is, and a staff member in one of the threads wrote that Flickr was discussing the matter internally, and later followed up to say that they were discussing the issue with Apple. Hopefully this will be resolved shortly.



It’s worth pointing out that this is probably exactly how Apple designed the syncing functionality to work in the first place. After all, syncing with Apple devices and MobileMe works the same way: delete something on your computer, and it deletes it elsewhere. But there’s no way anyone should be able to delete hundreds of photos at once without knowing full well what they’re about to do. Apple (and Flickr, for that matter) has failed to make it abundantly clear to users just what photo syncing really means, and that’s just bad design.

Photo via Flickr.





Michael Jackson’s Posthumous Album Is Coming to iTunes [TheAppleBlog]


This post is by Liam Cassidy from GigaOM Network







A storm sprang up this week around reports that, due to disagreements between Apple and Sony BMG, the upcoming Michael Jackson album “This Is It” — a tie-in to the movie of the same name and bound to be a sales success — would not be available on iTunes, the world’s biggest digital music provider.

When Michael Jackson tragically died in late June, sales of his music on iTunes sky-rocketed. A day after he died, eight of the 10 top-selling albums were from Michael Jackson. Eight of the 10 top-selling music videos, too. Five of the 10 top-selling singles were also from Jackson. It was a trend that would continue for weeks. With interest in (and thirst for) Michael Jackson music and video at an all-time high, online music vendors have a vested interest in the new album.

So it came as something of a surprise when, two days ago, news broke that iTunes was to be denied the chance to sell the upcoming album. Paul Resnikoff reported that, according to confidential information leaked to Digital Music News, Sony BMG and the Jackson Estate were insisting downloads could only occur within the constraints of a bundled, full album. So, if a customer wanted just one song from “This Is It,” they’d be forced to buy and download the entire album to get it. Apple’s policy, on the other hand, is well established in these matters; it insists on making individual tracks available for purchase and download. Hence the current standoff.

In his MediaMemo column on All Things Digital, Peter Kafka writes:

…the story is a familiar one, because it’s a longstanding dispute between Apple (AAPL) and the music business. The industry, for both financial and artistic reasons, has tried to keep music bundled together, while Apple insists on selling it a la carte.

Apple usually wins these disputes: Even the stubborn iconoclasts in Radiohead eventually bowed to Steve Jobs’s will and turned their precious albums into individual songs.

Lois Najarian, Sony’s Senior Vice President of Publicity, yesterday told Wired.com:

I’m happy to report that… Michael Jackson’s This Is It album will indeed be for sale on iTunes Oct. 27. I don’t have much more information to impart other than that right now, but suffice [it] to say fans will be able to purchase it there.

It was always unlikely Apple would have been blocked from selling the new album — the contract Apple has with Sony BMG to distribute its catalogue of music would see to that. So the question now is not if the album will be sold on iTunes, but how? Najarian doesn’t say, adding only that Apple and Sony BMG are “working on that now.”

Will Sony acquiesce to Apple’s rules, or, controversially, could we see Apple agree to the album-bundling method? It’s not impossible, given how well this release is expected to sell; Apple might be prepared to make an exception to its own (usually immutable) rules in favor of meeting the guaranteed demand of its iTunes customers. Plus, I’m sure the sales revenue it generates will not be unwelcome, either.

Whether temporarily or otherwise, if Apple does indeed make an exception and bow to Sony’s wishes, it’s a decision sure to cause frustration and anger amongst iTunes customers. And you can bet your bottom dollar we’ll be hearing from some pretty miffed artists unhappy they weren’t afforded the same special treatment as the late, great, King of Pop.

ProfileSnaps Lets Publishers Integrate Rich Media Contextual Pop-Ups


This post is by Leena Rao from TechCrunch






The recently launched ProfileSnaps adds context to content on news websites and blogs by providing in-text profiles that give the reader a snapshot of information about a public figure. The information appears in a pop-up window, and is dynamically updated to provide the latest news and information about the person.

So if you clicked on Barack Obama’s ProfileSnap, you’d see an about section, which gives a short bio of the U.S. president; a Twitter section that shows his latest Tweets; a news section with headlines that relate to him; videos of him from YouTube; and a photo section that shows a slideshow of pics of the President. The profile also includes a links section pointing to the individual’s Wikipedia profile and more. And ProfileSnaps can be saved by viewers, which allows them to track and follow an individual’s profile, including a stream of all relevant news and social activity.

To enable ProfileSnaps, you embed a line of code into your post and another line of code around the individual’s name, and the link to the profile will automatically appear in the post. Once a site enables ProfileSnaps, it will scan a page for known names and include profiles for the people the startup has information on. If there are names that aren’t in ProfileSnaps’ directory, the service will fetch information on those people from a variety of sources and combine it into a single mini-profile built on the fly. ProfileSnaps faces competition from Apture and SnapShots, which both provide contextual pop-ups for publishers.
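
As an illustration of the general approach described above (and not ProfileSnaps' actual code or markup), a page-scanning step might look roughly like this, with a toy directory standing in for the startup's real one:

```python
import re

# Toy directory of known public figures; a real service's directory and
# data sources would of course be far richer.
PROFILE_DIRECTORY = {
    "Barack Obama": {
        "bio": "44th President of the United States.",
        "twitter": "@BarackObama",
        "wikipedia": "https://en.wikipedia.org/wiki/Barack_Obama",
    },
}

def annotate(html):
    """Wrap known names in a span that a pop-up script could hook onto."""
    for name in PROFILE_DIRECTORY:
        html = re.sub(
            re.escape(name),
            f'<span class="profile-snap" data-name="{name}">{name}</span>',
            html)
    return html

def build_profile(name):
    """Return a mini-profile: from the directory if known, otherwise
    assembled on the fly from whatever public sources are available."""
    if name in PROFILE_DIRECTORY:
        return PROFILE_DIRECTORY[name]
    return {"bio": f"No stored profile for {name}; fetch news, tweets and "
                   "photos from public sources and merge them here."}

print(annotate("<p>Barack Obama spoke today.</p>"))
```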





Apple Hints at Mac Counterattack on Windows 7 [TheAppleBlog]


This post is by Charles Jade from GigaOM Network






With the release of Windows 7 next week, senior Apple VP Phil Schiller is boldly asserting that it “presents a very good opportunity for us.”

That opportunity will possibly come in a series of ads contrasting Windows with OS X, at least according to Peter Burrows of BusinessWeek. The campaign is expected to take on Windows 7 directly, and will likely “poke fun” at the upgrade process, from backing up data and reformatting drives to reinstalling software.

“Any user that reads all those steps is probably going to freak out. If you have to go through all that, why not just buy a Mac?” says Schiller.


The idea is that, rather than upgrade, people will be buying new computers, but the problem with Macs — especially in difficult economic times — is price. To that end, rumors continue to swirl regarding price reductions. Just last week, Google AdSense placements temporarily appeared in several European countries hinting at new iMacs, Mac minis and MacBooks. While only the Mac minis listed lower prices, it’s certain that new MacBooks and iMacs will have speed and storage increases, and the rumor of Blu-ray for the iMac persists.

In the interview, Schiller deflected inquiries about new Macs and lower prices, remaining dismissive towards Windows and predicting a poor upgrade rate for Windows 7 compared to Snow Leopard. In the end, Windows is “still Windows.”

Do You Trust Small Companies With Your Data More, Or Big Ones?


This post is by Louis Gray from louisgray.com







A few of this summer’s acquisitions featured a scrappy upstart much beloved by the Web masses getting absorbed by a larger, more-established acquirer – with two of the more prominent examples being Intuit’s buy of Mint.com and Facebook’s takeover of FriendFeed. And amidst the ensuing responses, I saw two truly oppositional reactions – the first from people who swore they would never use the larger company or service because they hated it or didn’t trust it, and the second, from people who now thought it was “safe” to use the smaller service as it finally had some parental supervision.

I recognize that some people have a greater tendency to accept risk in their lives, including risk to their data, than do others. Some lines of business and people operating those businesses are as a rule conservative – not venturing to buy one company’s goods until they have done a full background check on the firm’s financial stability, or have seen a flurry of similar use cases from peers. Others flock toward a series of early adoptions, where a personal relationship with a site’s founders or employees is possible, thanks to the product’s newness. And no doubt, the two sides rarely agree on a set strategy.

What are the underlying concerns both parties may have?

For Those Who Favor Big Companies Over the Upstarts

  • A small company may not have taken all necessary precautions to protect your data, making it vulnerable.
  • A small company may not have longevity, and if it expires, so too could your data.
  • A small company may grow desperate for funds and could sell your personal information.

For Those Who Favor Small Companies Over the Giants

  • A large company is more likely driven by sheer dollars than by customer service.
  • A large company may have a history that contains questionable moves.
  • A large company may act unilaterally in terms of how your data is used.

Alongside the two acquisitions I mentioned, there have been a few isolated cases of a smaller company putting itself up for auction, essentially turning its user base into a marketing list for sale to the highest bidder, whether or not that list contains personally identifiable information or even passwords. At the same time, you can see people who strongly dislike Google, don’t trust Microsoft, or think that Facebook is evil. I even saw a post go up yesterday saying that Cisco was evil. The bigger they are, the bigger a target they are.

I tend to trust companies rather than distrust them. I am an optimist. I think there is a point where personal relationships with the founders trump a robust multi-tier support system or a flashier GUI. But it’s not for everyone. What are your thoughts, and do mega-mergers change the way you perceive your data being protected?

Free e-Book: “Web Work 101: How to Escape the Cubicle” [WebWorkerDaily]


This post is by Simon Mackie from GigaOM Network






Earlier this year, the WebWorkerDaily team put together Web Work 101, a series of great posts for beginning web workers. I decided to collect the best of them in a free downloadable e-book: “Web Work 101: How to Escape the Cubicle.”

While the prospect of working from home (or maybe by a swimming pool, or by a beach hut somewhere exotic) sounds very enticing, it can also bring its challenges. How do you stay motivated when you’re not surrounded by your colleagues? What do you do if a critical piece of hardware fails? What are the best applications to use to stay productive? How do you make sure that you maintain good work/life balance?

This 33-page e-book should help answer those questions. It contains advice on what tools you need to use; what traits successful web workers have; how to set up your home office; planning and budgeting; how to avoid loneliness; and much, much more. If you’re a new web worker, or are thinking about becoming one, it’s worth checking out — download it here.

Let us know what you think of “Web Work 101: How to Escape the Cubicle” in the comments.

Quickly Copy File Paths to Your Command Prompt via Drag and Drop [Terminal Tip]


This post is by Adam Pash from Lifehacker






Windows/Mac/Linux: If you spend much time at a command/shell prompt, you’re probably very comfortable navigating from one folder to the next—but rather than manually typing through folders to find a file buried in your filesystem, just drag and drop instead.

Next time you want to change directories to a folder buried deep in your filesystem (but you’re looking directly at that folder on your desktop, for example), just type cd followed by a space, then drag and drop that folder into your command prompt, and voilà. The simple drag-and-drop trick does the job any time you want the path to a file or folder without a lot of hassle.

This isn’t a new feature by any means. You’ve long been able to drag and drop a file to the terminal in OS X and Linux, and weblog Addictive Tips reminds us (and the How-To Geek explains) that this functionality was also available in XP, broken in Vista, and now back in Windows 7. So even though it’s an oldie, if you spend much time at your operating system’s command prompt and haven’t used this one, it’s extremely handy.

Got a favorite terminal navigation shortcut of your own? Let’s hear it in the comments.






Open Source Collabtive Makes Project Management a Breeze [OStatic]


This post is by Guest Editor from GigaOM Network






By Lisa Hoover

When you’re collaborating with a team that’s flung across the globe, sometimes you need collaboration software with some heft to help you get the job done. The open source, Web-based project manager Collabtive might be just the tool you’re looking for. It has several features that make it a great alternative to proprietary tools like Basecamp.

Collabtive can handle unlimited projects, tasks, lists, and milestones, and there’s no cap on the number of team members that can access the project manager. Each user desktop gives an overview of projects, tasks, messages and a calendar. It’s easy to minimize the modules or export data with just one click.

Add a project with all the accompanying details like due date, budget, and team members. Use the editor to add images, links, bullet points, and videos in four different formats. When you add dates to tasks or projects, they’ll automatically show up in your calendar. A single click opens a new window with details of what’s due that day.

It’s easy to message other team members right from Collabtive and tag messages for easy search later. As is true in the project screen, you can add images, videos, links, etc. to anything you send.

My messages @ Collabtive

Collabtive tracks your activity and generates a report that shows time spent per project or cumulative hours across all projects. You can also filter report data for more granular information.

My projects @ Collabtive

Collabtive is completely themeable and available in more than 25 languages. Export reports and activity logs in PDF format, and sync your calendars via iCal. While Collabtive may not be feature-rich enough for extremely complex project management needs, it’s great for small teams that just need to make sure everyone’s on the same page.

Collabtive report

Siemens Snaps Up Solel for $418M, Eyes Solar Thermal Expansion [Earth2Tech]


This post is by Josie Garthwaite from GigaOM Network






Parabolrinnenkraftwerk Mojave / Parabolic trough power plant, Mojave

Solar thermal power company Solel Solar Systems has found an exit. Less than a year after Solel raised a gigantic $105 million investment from London-based firm Ecofin to help finance a plant in California’s Mojave Desert, Siemens has announced today that it is buying the Israeli company from Ecofin (and another unnamed major shareholder) for $418 million.

In a time when the merger and acquisition market for cleantech startups is about as dry as the Mojave, today’s deal is a head turner. And with Siemens angling to expand its role in solar thermal, the raft of startups now leading this space — companies like BrightSource, eSolar, Ausra and SkyFuel — could find themselves with a tough, deep-pocketed new competitor.

Being big has advantages in the solar thermal market, as Fred Morse, a senior U.S. adviser for the solar arm of Spanish renewable energy giant Abengoa, told us in an interview last year. Decades of project financing experience and an R&D budget on the scale of tens of millions of dollars can give companies like Abengoa — and Siemens — an edge in the race to build massive solar systems that can require investment of hundreds of millions to billions of dollars.

Headquartered in Beit Shemesh, Israel, with a subsidiary on the U.S. West Coast, Solel has built a workforce of more than 500 people over the last 14 years and snagged a deal back in 2007 to sell power generated at its Mojave project to California utility PG&E. The company has been on the hunt for a buyer for the last 6 months as part of an effort to gain a higher profile internationally and raise funds for more big-budget projects.

According to Siemens’ release this morning, Solel posted a revenue of nearly $90 million for the first half of this year, thanks to a solar receiver supply business (a key component for parabolic trough projects like the one pictured above) and its work engineering, planning and building solar fields. “In the future,” Siemens Renewable Energy Division CEO Rene Umlauft said in the release, “we’ll be able to offer the key components for the construction of parabolic trough power plants from a single source,” (Siemens already supplies steam turbines for solar thermal plants), and also boost solar thermal plant efficiency.

Siemens expects the market for solar thermal power plants to see “annual double-digit growth rates and attain a volume of over €20 billion” by 2020. As early as June, Siemens was reported to have entered talks with Solel for an acquisition deal, and Germany’s Handelsblatt newspaper (h/t Dow Jones) reports this morning that Siemens beat out French bidders Alstom and Areva by upping its initial offer of $250 million — interest that bodes well for solar thermal startups.

For more background on Solel, check out our Q&A with CEO Avi Brenmiller.

Augmented Reality — There’s an App, er Platform, for That [jkOnTheRun]


This post is by Kevin C. Tofel from GigaOM Network






This summer I got my first taste of augmented reality on a smartphone and found it a little bland. Part of the reason is because I live in the country where cows, deer and even bison can be found. I have yet to find an AR app that shows me where the nearest herd is and what they’ve tweeted or tooted. Usually, my not so technical nose can detect the toots, so there’s no need for any mobile tech in that area. Although my locale isn’t yet the best to test AR apps, I’m hitting up the iTunes App Store this morning to grab Layar, which just arrived.

Unlike Yelp’s augmented reality “easter egg” add-on — the Monocle is now a standard feature on the iPhone — Layar’s entire core revolves around augmented reality. The app is essentially an AR platform to superimpose local data from various sources in a visual view of your world:

“On top of the camera image (displaying reality) Layar adds content layers. Layers are the equivalent of webpages in normal browsers. Just like there are thousands of websites there will be thousands of layers.” (emphasis mine)

By creating a platform capable of a nearly limitless number of data layers, Layar took the opposite approach to the one I’m seeing from other AR challengers. Instead of building a local data repository first and then adding AR, Layar built the AR platform first so that data repositories could easily be added. It’s a subtle but very powerful difference, and although this market is in its infancy, I think it might give Layar a competitive advantage over other contenders. Now, where did those bison wander off to?
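
The mechanics behind superimposing a data layer on the camera view come down to a bit of geometry: given the phone's GPS fix and compass heading, each point of interest in a layer is placed by its bearing and distance from the user. A rough sketch of that placement step, using simplified flat-earth math and made-up coordinates rather than anything from Layar's platform:

```python
from math import atan2, cos, radians, degrees, hypot

def bearing_and_distance(lat, lon, poi_lat, poi_lon):
    """Approximate bearing (degrees clockwise from north) and distance
    (meters) to a point of interest; good enough for nearby POIs."""
    dlat = radians(poi_lat - lat)
    dlon = radians(poi_lon - lon) * cos(radians(lat))
    meters_per_radian = 6_371_000          # rough Earth radius
    distance = hypot(dlat, dlon) * meters_per_radian
    bearing = (degrees(atan2(dlon, dlat)) + 360) % 360
    return bearing, distance

def in_camera_view(bearing, heading, fov=60):
    """Is the POI inside the camera's horizontal field of view?"""
    offset = (bearing - heading + 180) % 360 - 180
    return abs(offset) <= fov / 2

# A hypothetical layer entry and a user facing roughly north-east.
user = (40.7420, -74.0048)
cafe = (40.7432, -74.0030)
b, d = bearing_and_distance(*user, *cafe)
print(f"bearing {b:.0f} deg, {d:.0f} m away, visible: {in_camera_view(b, 45)}")
```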



Law.Gov: America’s Operating System, Open Source


This post is by Carl Malamud from O'Reilly Radar - Insight, analysis, and research about emerging technologies.






Public.Resource.Org is very pleased to announce that we’re going to be working with a distinguished group of colleagues from across the country to create a solid business plan, technical specs, and enabling legislation for the federal government to create Law.Gov. We envision Law.Gov as a distributed, open source, authenticated registry and repository of all primary legal materials in the United States. More details on the effort are available on our Law.Gov page.

The process we’re going through to create the case for Law.Gov is a series of workshops hosted by our co-conveners. At the end of the process, we’re submitting a report to policy makers in Washington. The process will be an open one, so that in addition to the main report which I’ll be authoring, anybody who wishes to submit their own materials may do so. There is no one answer as to how the raw materials of our democracy should be provided on the Internet, but we’re hopeful we’re going to be able to bring together a group from both the legal and the open source worlds to help crack this nut.

The idea for Law.Gov seems to be getting a good reception in Washington, D.C. Senator Lieberman, writing on behalf of the Senate Committee on Homeland Security and Governmental Affairs, the committee responsible for the E-Government Act, has already accepted our request to submit our report to the Committee. Additional formal requests to submit the completed report are outstanding.

Law.Gov is a big challenge for the legal world, and some of the best thinkers in that world have joined us as co-conveners. But, this is also a challenge for the open source world. We’d like to submit such a convincing set of technical specs that there is no doubt in anybody’s mind that it is possible to do this. There are some technical challenges and missing pieces as well, such as the pressing need for an open source redaction toolkit to sit on top of OCR packages such as Tesseract. There are challenges for librarians as well, such as compiling a full listing of all materials that should be in the repository.
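
On the redaction point, the gap is less the OCR itself than flagging what the OCR output contains. A minimal sketch of the idea, assuming the tesseract command-line tool is installed; the patterns and file names are only examples:

```python
import re
import subprocess
import sys

# Patterns that commonly need review before a court document is published.
SENSITIVE = {
    "social security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "date of birth": re.compile(r"\bDOB[:\s]+\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def ocr(image_path):
    """Run Tesseract on a scanned page and return the recognized text."""
    # 'tesseract input.png stdout' prints the OCR text to standard output.
    return subprocess.run(
        ["tesseract", image_path, "stdout"],
        capture_output=True, text=True, check=True).stdout

def redaction_candidates(text):
    for label, pattern in SENSITIVE.items():
        for match in pattern.finditer(text):
            yield label, match.group()

if __name__ == "__main__":
    text = ocr(sys.argv[1])   # e.g. python flag_redactions.py page1.png
    for label, value in redaction_candidates(text):
        print(f"possible {label}: {value}")
```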

Law.Gov is an outgrowth of 3 years of work we’ve done at Public.Resource.Org along with our numerous colleagues in the open law movement across the country. There have been a series of piecemeal successes which have demonstrated that there is a demand and a need for more legal information to be more broadly available. I’m hopeful now that a truly national movement may have coalesced and that there is at least a chance we can bring this across the finish line and create a new function inside of government, the publication of America’s operating system on an open source platform.

The factor that made this coalesce was the recent Government 2.0 Summit put on by Tim O’Reilly. I gave a talk at that summit about the need to put primary legal materials on-line, and it was gratifying to hear the Deputy CTO of the United States, in his closing keynote, highlight that as one of the issues which he thought the White House should help make real through their “moral authority and convening power.” The Government 2.0 Summit was also an example of convening power, and I was very pleased that it was more than yet another conference about open government; it was a forum that brought together people interested in creating real change. Tim O’Reilly, as the Convener-in-Chief, should be congratulated, and I’m hoping that future Summits lead to even more concrete results.

A Personal Account On Getting Deeply Involved With Apache [OStatic]


This post is by Guest Editor from GigaOM Network






As we reported recently, the ApacheCon 2009 conference is rapidly approaching, to be held November 2nd through 6th in Oakland, California. The conference will feature sessions and speakers talking not only about web server- and services-related topics, but about the Hadoop software framework for data-intensive queries, and the many sub-projects that the Apache Software Foundation oversees. The event is partly intended to mark the 10th anniversary of the Apache Software Foundation, and we already ran a post from Jim Jagielski, co-founder and chairman of the foundation, on Apache’s future, and a post from Justin Erenkrantz, who is the president.

As another post in our Apache series, today we offer up a guest item from Shane Curcuru, a director at ASF, on his personal experiences with the foundation. Here it is.

How I Learned to Stop Worrying and Get Involved in the ASF

By Shane Curcuru, Director, Apache Software Foundation

With ApacheCon quickly approaching, I’m excited to meet fellow contributors from The Apache Software Foundation (ASF), particularly those I’ve never met before, in spite of collaborating with them for years. This is a frequent scene at our conferences, where many committers meet for the very first time. Invariably, one of the most commonly asked questions is, “so how did you get involved with Apache?”. Anticipating the follow-up “did your boss understand/support your work?”, here’s my story:

In the summer of 1999 I was lucky to be working at IBM just as they donated their LotusXSL code to the Apache XML Xalan project – one of the first projects at the ASF. From there, I’ve continued to get involved, step by step, into other areas of the ASF, and this year was elected to be on the Board of Directors. There are a few good tips to share along the way.

Committers and Projects

When IBM donated LotusXSL to Apache, I became a Xalan committer. Part of my IBM job responsibilities was to learn the Apache Way, and to “shepherd” my IBM teammates to work within the community – something I looked forward to. At the time, I had no idea where that decision would lead me.

As shepherd, I dove into learning how Apache projects worked. It was clear to me that the first lesson was that Xalan belonged to the Xalan community – not to IBM (our employer), nor to “us” (the original creators). We were careful to make community decisions for Xalan itself, separate from the LotusXSL builds that we provided for IBM. The pragmatic Apache License allows this, ensuring corporations can build proprietary solutions atop community-driven Apache projects.

Making community decisions about Xalan means using open mailing lists — as the saying goes, “If it doesn’t happen on-list, it didn’t happen.” As the Xalan community grew, we got great input from users and developers around the world. Working with my IBM co-workers, I was careful to relay details of any in-person meetings to the list, ask for community feedback, and encourage my teammates to participate in the lists. Open lists paid off technically, too, particularly when IBM and Sun collaborated within Xalan on major new XSLT processor features and standards.

My next big step was attending ApacheCon 2001. The presentations were great, but more important to me was meeting members of the community. I was impressed with the friendliness and openness of everyone there. Even as a newcomer, people were happy to answer questions and include me in their discussions. I attended the ASF Member’s meeting there as a guest, and even got to ask a question of my own.

Membership and Committees

My interest in how the ASF works was recognized, and I was lucky to be elected as an ASF Member in 2002. I continued attending ApacheCon each fall, learning the technology from the original committers during the day, and getting to know them over beers in the evenings. Once you start asking questions, it’s easy to get involved in new projects. I was curious how ApacheCons were run, so I asked some questions of Ken Coar, the then-VP of Apache Conferences (Concom). He invited – or should I say dragged – me to a Concom meeting, where I went in being curious, and came out volunteering to help. One of the core concepts of meritocracy within the Apache community is “do-ocracy”: if you have good ideas and volunteer to do the work, you often end up making the decisions too. It wasn’t long until I was voted onto the Concom PMC (Project Management Committee), and attended future ApacheCons as a member of the planning team. When my job at IBM shifted away from Xalan, I wound up cutting my technical work on Apache Xalan, but continued to volunteer on Concom and the PRC (Public Relations Committee). For me, the public recognition of my personal efforts is well worth the extra time I spend.

The most important takeaway about getting involved at Apache is simply: “just do it”. It may sound trite, but whether it’s on the mailing lists, or in person at ApacheCon, simply jump in and ask! If you have an idea or patch, share it and ask for feedback. Have the answer to a question? Send it to the list! “Patches welcome” is not just a slogan; it’s an invitation to participate in our communities, and to be recognized for your participation.

I look forward to seeing you at ApacheCon and hearing your Apache story.

*****************************

Shane Curcuru is currently a Director of the Apache Software Foundation, and serves on its Conferences (Concom) and Public Relations Committees (PRC). He is also an Applications Architect at IBM, and a contributor to the Apache Xalan and XML-Commons projects. You’ll find him volunteering behind the scenes at ApacheCon US 2009 in Oakland, helping to ensure the conference and the ASF’s great 10th Anniversary events come off without a hitch.


Are You Planning to Upgrade to Windows 7? [Reader Poll]


This post is by Adam Pash from Lifehacker






Windows 7 officially drops on October 22nd, so with just one week before its release, we’re wondering: are you planning to upgrade?

Give us the specifics on your upgrade plans in the comments.






Vid-Biz: Channel 4, MLB, RedLasso [NewTeeVee]


This post is by Chris Albrecht from GigaOM Network






YouTube Gets Channel 4 Content; all of Channel 4’s VOD catch-up shows will be on YouTube shortly after their broadcast — but only for users in the UK. (The Hollywood Reporter)

MLB Served Up 350,000 Live Streams Per Playoff Game; roughly 10 percent of those were viewed on the iPhone/Touch. (paidContent)

RedLasso Relaunches; news clip-sharing service is resurrected with content from more than 100 TV stations. (paidContent)

Edgecast Says It Is Profitable; the CDN announces that it became profitable in Q3 of 2009, and has been EBITDA positive since Q2 of 2009. (emailed release)

Honestech Releases FOTOBOX Plus; USB device helps you create slideshows from photos and videos. (emailed release)

Climate Change and Transportation: A Few Things Web Workers Can Do [WebWorkerDaily]


This post is by Charles Hamilton from GigaOM Network






Source: U.S. Department of Transportation, http://www.climate.dot.gov/about/transportations-role/overview.html

Transportation is the second-largest source of U.S. greenhouse-gas emissions, accounting for 28 percent of the total. As web workers, many of us can choose where we work, and how we get there.

I’ve been lucky enough to have a career that’s allowed me to live and work in places where I don’t need a car. There are many such places, even in North America, where being car-less is possible and desirable. (Of course, in most of Europe, it’s much easier.)

If you are contemplating moving, consider finding a residence where it’s possible to live without a car, or reduce the number of cars your family owns. Many city dwellers rely on public transit, and use taxis for the occasional shopping trip. My company has a membership in Zipcar, which allows us to rent vehicles by the hour when we need them, at very reasonable rates.

In most cases, you can save considerable money by not driving. But transportation alternatives can require investments of time that busy web workers may feel we can’t afford to take. That’s why it’s important to find locations where transportation options are reliable and convenient. After many years of building highways while neglecting other modes, many localities are now realizing the importance of encouraging light rail, commuter trains, and bus rapid transit, while making improvements for traditional buses, pedestrians, bicyclists, carpools and vanpools.

Even in less urban areas, transit-oriented development is becoming popular. Housing with convenient access to transit may be priced higher, but will actually save you money. According to Malcolm Kenton of the National Association of Rail Passengers:

“As you get farther from the center of a city, housing gets less expensive, but transportation costs grow at a higher rate than the cost of a home drops. The opposite occurs as you get closer in. Residents of outlying suburbs who depend on their cars spend an average of 25% of their household income on getting around vs. 9% for those living in walkable neighborhoods with good transit connections.

If transportation costs were considered as a factor in the affordability of housing, the whole equation would change in favor of denser, less car-dependent neighborhoods.

Nationwide, only 20% of housing units lie within half a mile of a bus or train stop, but in many larger cities, that figure is over 60%–even in places like Houston, Salt Lake City and Denver. Transit-oriented development doesn’t necessarily mean high-rise apartment buildings. It can also include townhomes and small single-family homes that are close together and laid out well enough to encourage walking.”
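
A quick back-of-the-envelope check of those percentages: only the 25% and 9% transportation shares below come from the quote, while the income and housing shares are invented for illustration.

```python
# Invented household income and invented housing-cost shares; only the
# 25% vs. 9% transportation shares come from the quote above.
income = 60_000

suburb_housing   = 0.28 * income   # cheaper house farther out (assumed share)
suburb_transport = 0.25 * income   # car-dependent, per the quote

city_housing     = 0.35 * income   # pricier walkable neighborhood (assumed share)
city_transport   = 0.09 * income   # good transit connections, per the quote

print("suburb total:", suburb_housing + suburb_transport)   # 31,800.0
print("city total:  ", city_housing + city_transport)       # 26,400.0
```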

Of course, not everyone is in a position to make major lifestyle changes. But here are a few simple things you can do to reduce climate change:

And, of course, if you aren’t already working from home, try telecommuting at least one day a week. WWD has some great tips for working from home effectively.

No matter where you live and how you get around, you and your children will be affected by climate change. All of us need to make whatever transportation changes we can, be they large or small. We also need to advocate for more earth-friendly forms of transportation. Support national and local projects that increase transportation efficiency. Join groups that encourage such projects. Nationally, I’m a long-time member of the National Association of Rail Passengers; locally, I’ve worked on a number of transit projects.

How are you advocating for more earth-friendly transportation systems? How have you made your transportation more efficient?