Uber is now dispatching self-driving cars to a select group of users in Pittsburgh, and invited us along for the ride. I hailed a car from Uber’s Advanced Technologies Campus Tuesday and took a 45-minute ride through the city’s downtown, eastern and northern neighborhoods. I watched as the wheel turned itself, carefully maneuvering us through crowded streets and around other cars… Read More
Beginning today, a select group of Pittsburgh Uber users will get a surprise the next time they request a pickup: the option to ride in a self-driving car. The announcement comes a year and a half after Uber hired dozens of researchers from Carnegie Mellon University’s robotics center to develop the technology. Read More
He mastered flying it, and then got a bigger drone. And then another bigger drone. Then came fighting the drones and building his own. Soon, Ettinger was considered by many to be the best drone fighter in the world.
Ettinger is only 16 years old, but he’s found success by getting to the heart of drone battling: Dream up something weird and make it real. Last year he surrounded his… Read More
Across 1,000 acres of cotton plants in Arkansas, Tyler McClendon is running an experiment. The seeds are the same, as are his company Oxbow Agriculture’s methods of growing them. But just ahead of planting the cotton seeds in April, Oxbow dusted them with a special microbe not usually found on its cotton plants. Their presence is expected to increase McClendon’s cotton yield by… Read More
Sam Cervantes is a quiet-seeming guy who speaks earnestly about his line of work. When I visited his Brooklyn 3D printer factory in 2013, workers in an assembly line were busy putting together Solidoodle printers. An army of assembled printers whirred as they lay down layers of melted plastic to make parts for the next set of machines. Read More
As drone pilots step out of isolated hobbyist groups and into the spotlight, many are asking themselves the same question: What will it take to make drone racing a sport acknowledged and embraced the world over? Read More
In 1895, audiences sat down to watch “L’Arrivée d’un Train en Gare de La Ciotat,” an early film that showed a train pulling into a station. Legend has it that when viewers saw the train barreling toward them they panicked, because they hadn’t experienced the new medium. Whether the story is true or not, virtual reality filmmakers are recalling it with interest. Read More
The public’s taste in drones is growing more sophisticated. The industry is improving so quickly that major new features are released every year, if not every few months. If a company wants to sell to the increasingly savvy entry-level quadcopter space, it needs to build a drone that can do more than simply fly. Read More
It’s tough to decide what to look for in a 3D printer these days. The oldest machines use sturdy, reliable parts, but can feel intimidating to the beginner. 3D printers aimed at the mass market favor closed-off designs meant to make them simpler to use, but they usually don’t live up to that promise and become even bigger headaches to fix.
The Lulzbot Mini is the new mid-range ($1,350) 3D printer from Aleph Objects, which has traditionally made printers for relatively experienced users. The machine has the same stripped-down, industrial feel of early—and technologically challenging—desktop 3D printers. But over several months of testing it proved so reliable and easy to use that I am convinced it belongs in the mass-market printer category.
Unboxing a 3D printer for the first time can be scary. Did it break during shipping? Do you need to calibrate it? Where does this weird part go?
Luckily, the Lulzbot Mini comes with a thorough guide to getting started. A twig of filament—the spooled string of plastic that feeds into the printer—comes pre-loaded in the printer with directions on how to use it for a test print.
When that runs out and you move to an actual spool, which hangs from a bar at the top of the printer, the guide walks through how to pop open the print head and load in the fresh filament. Like much of the printer’s innards, the latch you pop open is made of 3D printed plastic. It feels a bit flimsy. But the filament loaded easily enough, and if the compartment ever breaks, Aleph provides the files to just 3D print a replacement part.
The Lulzbot Mini’s print bed is made from a plastic called polyetherimide (PEI) that requires no maintenance. Most 3D printers need to have their bed prepped with painter’s tape, glue or hairspray to ensure the printed plastic sticks. PEI adheres to plastic on its own and does not need to be cleaned between prints.
Once everything is set up, you load a model into the software and hit print. Then, that’s it. The Lulzbot Mini and its software take care of everything else.
The Lulzbot Mini runs a custom version of Cura—the open source 3D printing software produced by popular 3D printer maker Ultimaker. Those who have used other 3D printers will have an easy time with Cura, but users new to the machines will likely find it intimidating. That feeling quickly abates.
Cura opens with a 3D workspace. You load in a file the same way you would open a document in Microsoft Word. Short menus on the left side provide options such as print quality and filament type. Models can also be rotated and scaled. It’s not beautiful, but it is simple and intuitive.
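Under the hood, the rotate and scale options boil down to applying a transform to every vertex of the model mesh before slicing. Here is a minimal sketch of that idea; the function name and the 200%-scale, 90-degree example are illustrative, not anything from Cura’s actual code:

```python
import math

def transform_vertex(v, scale=1.0, yaw_deg=0.0):
    """Scale a mesh vertex, then rotate it about the vertical (z) axis.

    A slicer applies a transform like this to every vertex of the
    model before cutting the mesh into printable layers.
    """
    x, y, z = (c * scale for c in v)
    a = math.radians(yaw_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# One corner of a 10 mm cube, scaled to 200% and rotated 90 degrees.
print(transform_vertex((10.0, 0.0, 10.0), scale=2.0, yaw_deg=90.0))
```

Scaling before rotating keeps the math simple: uniform scaling and rotation commute, so the order only matters once non-uniform scaling enters the picture.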
When you strap on an Oculus Rift virtual-reality headset, you’re free to look up, down and around. But as soon as you try to explore the virtual world further, you’re stuck. You can’t interact with your surroundings or walk across the room.
New controllers and sensors hitting the market are built to solve this problem, whether by tracking the precise location of your fingers so you can grab that virtual gun or giving you a simple joystick so you can “walk” from place to place. The HTC Vive, one of the highest-profile new headsets, lets you move around a real room and incorporates your motion into VR.
The startup Occipital thinks there’s a simpler way. Up until today, its candy-bar-shaped Structure Sensor, an accessory for mobile devices, has mostly been used for 3D scanning of physical objects—for instance, in order to create 3D-printable virtual models. Now, though, Occipital wants to expand into virtual and augmented reality by giving its sensor the ability to map entire rooms and incorporate a user’s actual movement onto a screen, and thus into a virtual world.
Mixing Virtual Reality And Reality Reality
At the Occipital office in San Francisco’s Mission Bay neighborhood, I recently rambled around with an iPad in my hands and a Structure Sensor strapped to its back. On its screen, I explored a Portal-esque room in hopes of opening a door to move on to the next level. I noticed a laser crossing the room; blocking it would open the door. But to do so I needed a few of the cubes circulating on a line by the ceiling.
I walked over to a coffee machine in the game, which is called S.T.A.R. Ops, by actually walking down the long row of desks in the Occipital office. I moved through the virtual room in much the same way. I tapped one corner of the screen to grab a coffee cup and moved the tablet away from my body as if I was sticking the cup into the machine. Coffee poured in.
I powered up a nearby gun by tipping the iPad to pour the coffee into a grate. I shot down some cubes and then stacked them in front of the laser, the iPad once again serving as a physical representation of the blocks. The door opened.
It’s a funny mix of the virtual and real worlds. Most virtual reality experiences are seated and don’t incorporate the tipping and reaching motions S.T.A.R. Ops calls for. While the movements are fairly intuitive, it takes a while to get used to them. But the learning curve is quick—on my second run through the level, I cut my time by two thirds.
Positional Tracking Gone Wild
The Structure Sensor works by projecting infrared dots across everything in a room. It can sense depth and motion based on the dots’ behavior and build a map of them that updates at 30 frames per second.
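The basic math behind this kind of structured-light sensing is triangulation: the projector and infrared camera sit a known distance apart, and the apparent shift of each dot encodes how far away the surface is. A minimal sketch, with illustrative numbers rather than the Structure Sensor’s real specs:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the depth of one projected infrared dot.

    The farther away a surface is, the smaller the apparent shift
    (disparity) of the dot between the projected pattern and what the
    camera sees, so depth is inversely proportional to disparity:
    depth = focal_length * baseline / disparity.
    """
    return focal_px * baseline_m / disparity_px

# A dot shifted 58 px, seen through a lens with a 580 px focal length
# mounted 6.5 cm from the projector, lies 65 cm away.
print(depth_from_disparity(580.0, 0.065, 58.0))
```

Run once per visible dot, 30 times a second, this is what builds the live depth map the sensor uses to track both the room and the user’s motion through it.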
A wise man once said that reality is “simply electrical signals interpreted by your brain.” Oculus, the virtual reality harbinger now owned by Facebook, agrees. And it wants you to believe it too, so you can accept virtual reality as a new form of reality.
“VR is more than just another platform,” Oculus chief scientist Michael Abrash said at the F8 conference Thursday. “In the long run it has the potential to create the whole range of human experience. Virtual reality done right truly is reality as far as the observer is concerned.”
Like the series of optical illusions Abrash showcased on stage, virtual reality works because of our brain’s stubborn quest to make sense of the world. Feed a slightly different image into each eye, and it will gladly decide you are seeing depth and motion. It will gladly process virtual images as real, giving you a sense of actual presence.
It’s this model of the world, filtered through our brain’s limited sensors, that we experience as “real” and trust implicitly, Abrash said. It’s a model built by millions of years of human evolution that is based on assumptions that are almost always right. Virtual reality works because it feeds the brain enough matching information that the brain assumes what it is seeing is real. That’s presence.
Merging The Virtual Into Reality
VR is only in the beginning stages. Abrash said over and over that it has just now reached that minimal level of presence. By adding haptics—physical feedback that corresponds to the virtual world—better screens and improved audio, virtual reality can become even more lifelike. The hardware itself will get smaller, lighter and more powerful.
Abrash also talked about bringing the real world into the virtual. For example, you should be able to look down and see your own body. If you want to reach out and grab your coffee, there’s a virtual representation you can pick up without taking off your mask. It sounded like a hybrid form of augmented reality, a different way of experiencing the real world.
Oculus didn’t make any announcements about the long-awaited release of Oculus Rift. Abrash did say it will be “shipping in quantity before long.” Facebook CTO Mike Schroepfer showcased a video game and said, “You’re going to be able to do this this year in VR. You’re going to be doing it in something shipped by Oculus.”
Both Schroepfer and Abrash threw up images of Crescent Bay, the latest publicly shown Oculus prototype. Schroepfer was quick to clarify on Twitter that he wasn’t talking about an actual Rift release, and never mentioned Gear VR, the mobile headset slated for broad consumer availability later this year.
In my reality, I’m going to go ahead and envision a 2015 Rift ship date.
Drones and satellites will soar above the earth under Facebook’s plan to bring Internet connectivity to remote corners of the globe.
At the F8 conference Thursday, Facebook CTO Mike Schroepfer revealed images of the company’s first such product: the Aquila, a solar-powered drone with the mass of a small car and a wingspan wider than that of a 737 jetliner.
Facebook acquired five employees from drone startup Ascenta last March. The team built the Zephyr—a drone that could fly for two weeks on solar power alone. With its distinctive U-shape, the Aquila appears to be a direct descendant.
The drone’s development is managed by Facebook’s Connectivity Lab, a part of the company’s Internet.org initiative, which plans to bring Internet connectivity to the several billion people in the world who have never had access. On stage, Schroepfer described the project as an answer to simple economics: Companies spend billions wiring cities, but can’t expect the same return on investment in rural areas. As a result, people in remote regions rely on either limited options or none at all.
“You have to have satellites, drones and other things that don’t require the massive investments in terrestrial infrastructure in order to provide internet access for this world,” Schroepfer said.
Photos by Signe Brewster and Owen Thomas for ReadWrite
If you are interested in sharing a video across the Web, you probably start by uploading it to YouTube or Vimeo. Facebook wants in on that action, and as of today allows videos uploaded to the site to be embedded elsewhere.
During an F8 keynote address Wednesday, product marketing manager Deborah Liu announced anyone can now grab the embed code from their Facebook videos. “This dramatically increases the potential reach of your content,” she said.
But not anyone’s ability to make money from video. Upload a clip to YouTube and you can get paid based on views if you let YouTube place ads in it. There’s no such arrangement on Facebook at the moment, although product management director Fidji Simo said the company is thinking about it.
“We know this is very important,” Simo said. “We have started experiments in that space. It’s very, very early. This is a space where we are going to have to take it slowly because we have to figure out the best possible user experience.”
Facebook is also increasing the size of videos allowed to 1.5 gigabytes and allowing uploads to be resumed after a disruption, such as loss of internet connectivity. Brands can now decide to only show a video to users of a certain age or who live in a specific area, and if they take down a video they can still access its analytics. Videos can be scheduled for posting and takedown with new partners like Socialflow. Other partners allow for direct posting to Facebook or traffic monitoring.
CEO Mark Zuckerberg said in the morning keynote that video will likely be the most common type of content uploaded to Facebook within five years. The company recently bumped video views with features like auto-play and more prominent video showcases on pages. Simo noted Facebook is already seeing 65 percent of video views come from mobile devices, making them a priority for her team.
Oculus might not have a release date for its virtual reality headset yet, but its parent company Facebook is already thinking about how to incorporate virtual reality into its lifeblood: the newsfeed.
CEO Mark Zuckerberg said at the F8 developers conference Wednesday that Facebook plans to add newsfeed support for spherical videos—the 360-degree panoramas necessary to make VR an immersive experience.
Look around where you are right now. Now imagine you’re a camera, your surface studded with many different lenses that capture the scene from all angles. Imagine loading that spherical video into a virtual reality headset, where the wearer has the exact same viewpoint. They can see exactly what you see at this moment. Within a VR headset, they can turn their head from side to side and look up or down, and the view changes as if they are looking around the room.
That’s spherical video, although it’s much cooler to experience than to read about.
Getting Spherical On Facebook
Even outside of a VR headset, Facebook envisions letting newsfeed users pan around in 360-degree videos with a finger swipe or mouse. It’s the same experience, except the user is viewing through a rectangular window on their screen instead of an immersive VR headset. It doesn’t inspire the same feeling of real presence, but it still captures more of a scene than a traditional camera.
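Panning like this is straightforward because spherical video is commonly stored as an equirectangular image, in which yaw and pitch map linearly to pixel coordinates. A minimal sketch of that mapping (the 3840x1920 frame size is just an example):

```python
def view_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction to a pixel in an equirectangular frame.

    Yaw runs -180..180 degrees and maps linearly to x; pitch runs
    -90..90 (up positive) and maps linearly to y. Panning the view is
    just sliding a rectangular window across the stored frame.
    """
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return x, y

# Looking straight ahead lands in the center of a 3840x1920 frame.
print(view_to_pixel(0.0, 0.0, 3840, 1920))
```

The same lookup drives both the finger-swipe window on a phone and the head-tracked view in a headset; only the input controlling yaw and pitch differs.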
“I actually think that video is going to be more engaging (than video games for virtual reality) in a lot of ways,” Zuckerberg said. “This is a new and much more immersive type of content. You’re actually interacting with it and you feel like you’re there.”
The Newsfeed videos showcased on-stage at F8 were shot with multiple sets of spherical camera arrays, a setup that lets the viewer “jump” from side to side to gain additional perspective in addition to simply panning around. In the demo room, I experienced a live feed of the Facebook campus’ Hacker Square. It was shot with six GoPros arrayed in a ball; their video was then stitched together to provide the panorama.
User-created videos, at least in the near term, would be shot from a single location, much like the GoPro ball. That takes away the feeling of being able to step from side to side, but still allows the viewer to look around as if they are standing in place. The camera pictured above, which is built by professional VR camera company Jaunt, is one high-end example of a stationary camera.
Consumer spherical cameras, such as the relatively low-quality Ricoh Theta, currently cost as little as $300. That may drop in coming years as a wave of options from crowdfunding-backed startups and large camera companies hits the market. Eventually, 360-degree cameras could even work their way into our phones. Here’s an example of an image I captured with a Ricoh Theta (which is also capable of shooting video):
Drone copters are starting to get smart enough to take on complex tasks with little input.
The drone startup Matternet, for instance, will release its first product on March 30—a quadcopter capable of carrying a kilogram of cargo up to 20 kilometers (that’s 2.2 pounds over 12.5 miles). The drone, which is a bit larger than the consumer variety, is capable of carrying out autonomous flights at the push of a button.
Using an associated app, you simply select a nearby drone and a destination. Matternet’s software determines the best route and spots any obstacles, such as a no-fly zone around an airport. Then the drone takes off—no further input needed.
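The core geometric test in that kind of route check can be sketched simply: the straight-line leg must never come closer to a no-fly zone’s center than the zone’s radius. This toy version works in flat x/y kilometers with made-up coordinates; real routing software like Matternet’s would use geodesics and live airspace data:

```python
import math

def segment_clears_zone(p0, p1, center, radius_km):
    """Check whether a straight flight leg stays outside a circular
    no-fly zone, treating coordinates as flat x/y kilometers.

    The minimum distance from the zone center to the segment must
    exceed the zone radius.
    """
    (x0, y0), (x1, y1), (cx, cy) = p0, p1, center
    dx, dy = x1 - x0, y1 - y0
    # Project the center onto the segment, clamped to its endpoints.
    t = 0.0
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq > 0:
        t = max(0.0, min(1.0, ((cx - x0) * dx + (cy - y0) * dy) / seg_len_sq))
    nearest = (x0 + t * dx, y0 + t * dy)
    return math.hypot(nearest[0] - cx, nearest[1] - cy) > radius_km

# A 20 km leg passing 3 km from an airport with a 5 km no-fly radius
# fails the check; a parallel leg 10 km north clears it.
print(segment_clears_zone((0, 0), (20, 0), (10, 3), 5.0))
print(segment_clears_zone((0, 10), (20, 10), (10, 3), 5.0))
```

A planner would run this test for every leg of a candidate route, detouring around any zone a leg intersects before the drone ever takes off.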
Matternet CEO Andreas Raptopoulos sees potential for the drone among on-demand delivery services like Instacart and Postmates, which could use them to quickly send off items to customers. But it could also be used to fly blood, or even a transplant organ, from one hospital to another.
The only thing currently stopping it is the FAA’s rule forbidding drones from being operated out of the pilot’s sight. That could change soon if the FAA decides to grant more relaxed guidelines for their operation. (Though that’s very much in question right now.)
The Robots Are Coming! Hurray!
At a SXSW “robot petting zoo,” Matternet’s drone was joined by machines built to respond in the case of a disaster or to take on essential tasks that would otherwise require lots of manpower or expensive equipment.
AirRobot, a sleek, stripped-down traditional drone (i.e., an unmanned aerial vehicle resembling a pilotless plane), has responded to 18 of the 43 disasters at which robots have been utilized by responders. It can fly much lower than a helicopter, giving crews more precise intelligence. It’s been used at train crashes and mudslides to provide live video feeds of the disaster. Teams can use the feed to monitor the situation and plan their response.
Halodrop, another drone at the zoo, uses live streams to monitor major infrastructure for damage. Instead of a five-person crew working for a week, it can check, say, an entire bridge in three hours.
The most striking bot at the zoo was Muppette, the product of two architects who grew tired of the limited build area on their office’s 3D printer. They began building a 3D printing drone in their spare time with an off-the-shelf drone and sensors. A tube snakes out from the drone’s belly, where it can deposit a concrete mix to build a temporary shelter. In the case of a disaster, it could effect temporary repairs to roads or other essential infrastructure.
There’s no doubt that drones are entering our lives. The question is whether the world will see only the bad, or welcome the good.
Lead photo courtesy of Matternet; other photos by Signe Brewster for ReadWrite
After pushing back deadlines by a few months, the 10 remaining teams in the Tricorder X Prize are nearing the day they will deliver a device that can diagnose 15 diseases and capture other basic health information through at-home tests. The teams are scheduled to deliver working prototypes in June to a UC San Diego study that will test the devices on patients with known medical disorders to measure their accuracy.
“We’re pretty confident that the majority of the 10 finalist teams will actually be able to deliver,” senior director Grant Campany said. “Some may merge, and some may fall out, just because they can’t pull it together. And that just reinforces how big of a challenge this really is. It’s because the goals are very high.”
The winning “tricorder”—and its competitors—likely have a long FDA approval process ahead of them, which means their consumer release could be years away. But when they do arrive, they will be able to diagnose problems like stroke, anemia and tuberculosis—tasks that have always been reserved for doctors.
Diagnosis: Home Diagnosis
Such devices will arrive at an interesting time in medical history. With the emergence of mobile phones and wearable devices, home diagnostics are poised to explode.
Campany said the Apple Watch and its affiliated software development kits will be a welcome boost for the space.
“I think it’s a good first step, and a useful barometer of what the public’s appetite is for this type of technology,” Campany said. “There’s going to be a need of collection and analysis, and these types of tools are going to be absolutely critical. If the masses are able to start building capabilities, using these research kits, it’s the first step toward adoption.”
Apple’s recent backtracking on the watch’s health sensors indicates just how tricky versatile diagnostic devices can be. But even without them in the first edition of the watch, Apple could play a crucial role in driving a central space for health data collection. Imagine if your phone knew your current health statistics, but also had your records and your 23andMe genetics profile.
“Right now it’s just so fragmented. The first step is to centralize all the data collection,” Campany said. “The big part is trying to get patients and consumers aware of what’s on the horizon. They’re really going to have an opportunity to be extremely proactive.”
Along with providing users with an immediate snapshot of their current health, your phone could quickly communicate with doctors about the results. Doctors could access an individual’s results or look at trends in large populations. They could make calls on whether an office visit is really necessary, or whether a two-hour visit could be condensed into a quick phone call.
That applies in rural areas or developing nations too, where access to a doctor can be scarce.
When you watch a video in virtual reality, something feels a little bit off. While it’s meant to feel as lifelike as possible, the headset isn’t able to quite perfectly replicate how our eyes look at the real world.
One reason is focus. If you look at something close to you, the horizon blurs. In virtual reality, everything remains clear and sharp. The picture still looks good, but it feels just the slightest bit wrong.
Fove, one of the first startups to go through the Rothenberg Ventures River virtual reality accelerator, has a solution. Its custom headset and software track exactly where your eyes are looking and take action based on that. Right now that means gaming actions such as shooting a gun at a target, but eventually it could be used to set focus in film scenes and make VR a more natural experience.
Making Eye Contact
CEO Yuka Kojima became interested in eye tracking while working at Sony (she was unaffiliated with the Project Morpheus VR headset). She craved more interaction with virtual characters, with whom eye tracking could create eye contact and natural emotional responses. Sony eventually cut her research budget and she moved on to found Fove with CTO Lochlainn Wilson.
At River’s demo space at SXSW, I used the amusingly big Fove headset to shoot down enemy ships in a futuristic city—essentially a high-tech version of Galaga. I simply looked at a ship and laser beams erupted from my eyes. The aim was extremely accurate.
Shooter games probably aren’t the best application for eye tracking technology. It limits strategy and the number of actions you can take, as you can’t look around and continue to fire at your target. But Kojima is absolutely right about interactivity. Imagine prompting a dialogue with a character by simply looking at them.
Last year, Fove debuted a video of a boy who lacked use of his hands playing the piano. Inside a Fove headset, he could simply look at boxes corresponding to the keys to play. The startup is also working with paralyzed medical patients to help them communicate or even move a humanoid avatar.
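At its simplest, a gaze-driven interface like the piano demo reduces to hit-testing the tracked gaze point against on-screen boxes. A sketch of that idea, with a made-up key layout rather than anything from Fove’s software:

```python
def key_under_gaze(gaze_x, gaze_y, keys):
    """Return the name of the on-screen box the gaze point falls in.

    keys maps a name to an (x, y, width, height) rectangle in screen
    pixels; the eye tracker supplies the gaze point.
    """
    for name, (x, y, w, h) in keys.items():
        if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
            return name
    return None

# Three adjacent 100-pixel-wide key boxes along the bottom of the view.
keys = {"C": (0, 400, 100, 80),
        "D": (100, 400, 100, 80),
        "E": (200, 400, 100, 80)}
print(key_under_gaze(150, 430, keys))
```

A real system would also debounce the raw gaze signal and require a brief dwell time before triggering a key, so ordinary glancing around doesn’t play stray notes.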
Virtual reality appeals to us because it mimics the human experience. Anything that can be done to make that feeling even stronger—or restore it—is a worthwhile pursuit.
The desktop 3D printer space is getting crowded. In just a few years, we’ve gone from “Wow! A 3D printer!” to “How cheap is it?” and “How are you going to convince me to buy it?”
That’s why I was surprised to be, well, surprised by the Tiko, a Canadian 3D printer that will hit Kickstarter next week. The Tiko is a delta printer, which means its body is shaped like a triangle instead of a rectangle. It has a sleek, unibody design. And it will cost $179.
In case you haven’t been playing along at home, that’s peanuts in a market where lots of 3D printers will set you back $400 or more.
I saw the Tiko in action at the SXSW conference, where it was hard at work pumping out blue trinkets. CEO Matt Gajkowski said he designed the printer with mass production in mind, meaning every part can be made on standard equipment. That bodes well for crowdfunding backers, who often have to wait months for delayed 3D printer deliveries. The Tiko is slated to ship this fall.
The machine is super light, and designed to simply be picked up when a print is done. The print bed, which is unattached, then remains sitting on the table. The bed is made from a special material that does not need to be prepped with glue or tape to get prints to stick. To remove a completed print from the Tiko, you bend the bed until it pops off—something I have never seen on a 3D printer.
The bed is not heated; heated beds are generally used to keep a print from shrinking during the print process, which can cause warping. Interestingly, the Tiko can print right up to the edges of the bed, making the entire clear area of its body accessible print space. No part of the Tiko’s footprint is wasted.
The printer’s print head, which dangles from three arms in the traditional delta style, has no cooling fan. 3D printers work by running spools of plastic through a heated nozzle, which then oozes out the melted string of plastic layer by layer. A fan makes sure everything cools correctly, but also adds cost and noise. Gajkowski said hollow tubes open to the print head just carry the heat away, so no fan is needed.
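Positioning a head that hangs from three arms takes different math than a rectangular printer’s straight-line axes: for each tower, the carriage must sit high enough that its fixed-length arm reaches the head. A sketch of that inverse kinematics, with illustrative tower spacing and arm length rather than the Tiko’s real geometry:

```python
import math

def carriage_heights(x, y, z, towers, arm_len):
    """Inverse kinematics for a delta-style 3D printer.

    Each arm hangs from a carriage that slides vertically on its
    tower. To put the print head at (x, y, z), each carriage must sit
    at z + sqrt(arm_len^2 - horizontal_offset^2), where the offset is
    the distance from that tower to the head in the horizontal plane.
    """
    heights = []
    for tx, ty in towers:
        d_sq = (tx - x) ** 2 + (ty - y) ** 2
        heights.append(z + math.sqrt(arm_len ** 2 - d_sq))
    return heights

# Three towers 100 mm from center, 120 degrees apart; 200 mm arms.
towers = [(100 * math.cos(math.radians(a)), 100 * math.sin(math.radians(a)))
          for a in (90, 210, 330)]
print(carriage_heights(0.0, 0.0, 10.0, towers, 200.0))
```

At the center of the bed all three carriages sit at the same height; moving the head toward one tower drops that carriage and raises the other two. The firmware runs this calculation continuously to turn straight-line moves into coordinated carriage motion.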
Align Those Prints
Tiko’s delta design is also unusual for a printer aimed at the general public. Experienced makers often choose deltas because they tend to be suited to taller prints. Gajkowski argues that more traditional rectangular printers, which usually have a print head on bars that run from the back to front and between the two sides of the printer, can become misaligned more easily, leading to bad-looking prints.
Lots of people in the industry would disagree, and it’s definitely true that in the end it depends on the printer. If the Tiko is well made, then it will stay aligned more reliably.
The Project Wing delivery drone Google revealed to the world last August is no more. Instead of going head to head with Amazon’s drone initiative, the Google X team behind the UAV (unmanned aerial vehicle) decided to scrap its plans and pursue a different design.
Google’s “Captain of Moonshots” Astro Teller, who leads its secretive X lab, revealed to a South By Southwest crowd Tuesday how the decision came about. It began with CEO Larry Page giving his team a tough mission: He wanted a prototype that could deliver goods to non-Google employees within five months. Teller and his crew only had one design that they could build that quickly, and even though they knew it wouldn’t be right, they went ahead with it anyway.
It might sound odd to proceed with a project that was destined to fail, but Teller considers failure an integral part of the process at Google X. In fact, he says it’s a key ingredient for success.
At Page’s behest, the team continued work on the doomed Project Wing delivery drone for five months. Once the term was up, the team had a clearer vantage point of the work at hand. They abandoned the version that debuted last year and began work on a totally new design that reflects everything learned from that previous experience. (Google X promises to say more about the new drone’s design later this year.)
Project Wing isn’t the only failure Teller wants the world to know about. Another recent, and much publicized, example is Google Glass, the smart eyewear Google sold to a group of “explorers.” The company later abandoned its original plans before the facegear could reach the general public. Though not quite dead, the device is no longer a Google X project. It will live out its second act under the umbrella of Nest co-founder and recent Google addition Tony Fadell.
“The thing that we did not do well, that was closer to a failure,” Teller said, “was that we allowed and sometimes even encouraged too much attention for the program.” In other words, instead of clearly classifying the product as an early prototype, the company practically hyped and marketed it as a finished product. That was at odds with the whole point of the Explorer program, which essentially placed the headsets with beta testers.
Keeping Failure—And Success—Afloat
While Glass got a lot of high-pressure, high-profile attention, many of Google X’s failures are small. In those cases, they can work like the essential baby steps that can refine a product while research and development are in motion.
For instance, Google managed to improve its Project Loon initiative, which aims to provide Internet to remote parts of the world, by increasing the life of the balloons. The company figured out how to keep them aloft
In the year since cinematic virtual reality startup Jaunt launched, the world of VR has once again turned upside down. There are new headsets, new industries interested and, finally, definite plans to release it to consumers. The race is on to determine which content will define the world’s first experiences with the medium.
Jaunt CEO Jens Christensen can’t help but get excited about all the activity and identify one of its greatest opportunities:
I think technology has gotten to the point [where] people are excited about it. They’re going to get the headsets, especially mobile headsets. The biggest overarching thing is just creating enough content so we can have what I call critical mass of content in 2015. So when people get their headsets, there is great content they can view, but also fresh content coming every day, every week, so they keep coming back.
At the South By Southwest festival, Christensen had a fresh reel of content to show me, including aerial shots of rock climbers and an on-stage view of a Paul McCartney concert. Jaunt’s custom camera captures 360 degrees of 3D video that pulls on the emotional strings that make virtual reality feel so real.
VR Films Are No Laughing Matter
It’s clear that Jaunt is beginning to experiment with new forms of video. Back when the company was still in stealth mode, its clips felt mostly like home movies—simple shots of a children’s choir, BMX bikers and a tranquil yard. There was one experimental scene from a horror film.
Today, the production value feels much higher. In one scene from “Other Space,” the upcoming Yahoo comedy by the creator of “Freaks and Geeks,” characters lobbed mildly humorous insults and got in my face while examining me aboard their spaceship.