Virtual Retail Showrooms

I’m not a big watch person. I bought my current one on Amazon for about $35. It gets me through the day, tells the time, and looks decent. That’s all I need.

But if you’re really into watches and making a hefty purchase online, the last thing you want is something that doesn’t fit properly or looks completely different from the photos. Enter Chrono24. They’ve used 3D modeling software to capture every watch in their catalog, and can superimpose each one (with exact dimensions) onto your wrist.
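I have no idea what Chrono24’s implementation actually looks like, but the “exact dimensions” part comes down to some simple camera math. Here’s a minimal sketch, assuming a basic pinhole camera model and entirely made-up numbers:

```python
# A minimal sketch of "true to scale" AR sizing, assuming a simple pinhole
# camera model. All numbers are hypothetical; this is not Chrono24's code.

def on_screen_size_px(object_mm: float, distance_mm: float, focal_px: float) -> float:
    """Pixels an object of a given real-world size spans at a given distance."""
    return focal_px * object_mm / distance_mm

# Say the app knows a watch case is 41 mm wide, estimates your wrist is
# 350 mm from the camera, and the phone reports a focal length of ~1500 px.
case_px = on_screen_size_px(object_mm=41, distance_mm=350, focal_px=1500)
print(f"Render the case about {case_px:.0f} px wide")  # ~176 px
```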

I personally think it’s a little ridiculous, but hey, if this is something that people actually use, it’s a great idea. Who knows, maybe online clothing stores will start offering virtual try-on apps next. It’s uncharted territory for the most part, so it could be the next big thing sooner than we think.

Watch the video below if you want to know more:

Generative Design

3D printing took the world by storm a few years ago, and designers haven’t looked back. In fact, the possibilities have started unfolding in many new ways. Tamu, a design company overseas, created what they call the world’s “most optimized folding chair,” one that takes up less space and uses the least material possible.

So how do they do it? Something called Generative Design. Designers feed parameters into the computer, which takes those simple points and fills in the space between them. You can see the rudimentary physical model the design team created below, alongside the corresponding digital model they input.

From that minimal, planar model, the designers can then explore the space between each hard point. The hard points won’t change, so the structure stays intact, but each plane has a lot of wiggle room. For example:

Look at all of the unique iterations the computer can come up with. Those webs still make for a structurally sound piece of furniture, but by thinking outside of the box (pun intended), the program is able to warp the planes into more hollow spaces, resulting in the masterpiece we see here:

I mean, look how compact and flat it is! And it’s visually stunning. Amazing how designers can use new technology to create something so unique. I wish I knew more about Generative Design so I could give you specific details, but it’s still over my head. Here’s a video I saw a couple of years ago when the chair first launched. Enjoy!
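That said, the core loop is simple enough to sketch. Here’s a toy version in Python: hard points the designer fixes, random candidate structures in between, and a score that rewards using less material. Everything here is illustrative; real generative design tools add physics simulation and far smarter search.

```python
# A toy illustration of the generative-design loop: fixed "hard points",
# random candidate structures in between, and a score that rewards using
# less material. Real tools add actual structural simulation.
import itertools
import math
import random

# Hard points the designer fixes (x, y) -- these never move.
HARD_POINTS = [(0, 0), (4, 0), (4, 3), (0, 3)]

def material_used(edges):
    """Total length of all struts: a stand-in for material cost."""
    return sum(math.dist(a, b) for a, b in edges)

def is_connected(points, edges):
    """Every hard point must be reachable; a crude 'structurally intact' check."""
    adjacency = {p: set() for p in points}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = set(), [points[0]]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(adjacency[p])
    return seen == set(points)

def random_candidate():
    """Pick a random subset of possible struts between hard points."""
    all_edges = list(itertools.combinations(HARD_POINTS, 2))
    k = random.randint(len(HARD_POINTS) - 1, len(all_edges))
    return random.sample(all_edges, k)

# Generate many candidates; keep the connected one that uses least material.
best = min(
    (c for c in (random_candidate() for _ in range(1000)) if is_connected(HARD_POINTS, c)),
    key=material_used,
)
print(f"best design: {len(best)} struts, material = {material_used(best):.2f}")
```

The real magic is in the scoring function: swap “total strut length” for a proper structural simulation, and the same generate-and-filter loop starts producing those strange, organic webs.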

Designing Around the Human Who Can’t Put Their Phone Down

@nworeport

This is probably one of the most fascinating and infuriating designs I’ve seen in a long time. People have become so engrossed in their phones – even whilst walking across a busy street – that a Dutch town has started putting in pedestrian stop lights. No, it is not the same as the universal blinking red hand and white walking person signal we see everywhere (like the photo above). It is even more obnoxious than that.

Several cities across Europe have started to adopt in-the-ground lighting that coincides with the traffic lights above. Here are some visuals:

So, why are we starting to do this? Well, it turns out, people – and might I add, not the brightest bulbs in the box – have started walking across intersections while looking down at their phones. Like I said…not the brightest.

Designers, being the cool people that they are, have now solved this problem by putting lights below. Now lazy people who don’t want to look up from their devices can walk safely across the road without even having to try. How amazing! Let’s stop teaching our kids to look both ways before crossing the street. Brilliant.
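Mechanically, there isn’t much to it: the pavement strips just mirror whatever the pedestrian signal above is doing. Something like this hypothetical sketch, where every state name and timing is invented and the hardware call is a placeholder:

```python
# A hypothetical sketch: in-ground LED strips that simply mirror the
# pedestrian signal's state. States, colors, and timings are invented.
import time

SIGNAL_TO_GROUND = {"walk": "green", "clearance": "flashing green", "dont_walk": "red"}

def set_ground_leds(color: str) -> None:
    """Placeholder for whatever hardware call drives the in-pavement strip."""
    print(f"ground LEDs -> {color}")

def run_cycle(phases=(("dont_walk", 20), ("walk", 10), ("clearance", 5))):
    """Step through one signal cycle, keeping the pavement lights in sync."""
    for state, seconds in phases:
        set_ground_leds(SIGNAL_TO_GROUND[state])
        time.sleep(seconds)

run_cycle()
```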

As you can tell, I’m not super happy about this. This is a great example of ingenious design, and yet, people are balking. Several city boards have said this is merely “rewarding bad behavior.” And I’d have to agree. I listened to a podcast the other night that discussed the benefits and possible downfalls of artificial intelligence in the future. One of the researchers said people often fear the one-on-one experiences individuals will have with robots. But what he is concerned about is the interactions people will have with other people once they’ve interacted with AI.

For example, he mentioned children talking to Siri/Alexa/Google in an authoritative tone without using pleasantries. “Siri, play me this song.” “Alexa, remind me to do this tomorrow.” “Google, tell me what ____ is.” All without one please or thank you. Children are (hopefully) taught at a young age to use pleasantries because it’s the right way to treat people. It’s polite. What starts to happen when children get what they want from AI by being rude? Will they start to be rude to other kids on the playground, bossing them around and hoping for obedient results? “Suzy, give me the ball.” Kids are ruthless enough as it is.

Admittedly, I had never thought about this so poignantly. We see robots revolt in action movies after mistreatment, and humans kill off robots after they’ve gotten too powerful. All of that is fairly black and white, and physical too. Easy to digest and predict. But what I’ve failed to see fleshed out is the kind of nuanced ripping of our social fabric this scientist theorizes.

Now, I’m not suggesting this pedestrian lighting initiative is tearing apart our way of life. But I am suggesting people should look up from their damn phones, because that’s gonna rip apart the social fabric real quick. It’s already begun.

AI Could Take Over Creative Jobs, Too

NVIDIA’s New AI Tool Transforms Simple Scribbles Into Realistic Landscape Images
@digital.information.world

I’ve been off blogging for the past few days. This week I had my birthday, and everything with school and life seemed to happen all at once. It’s funny how that always tends to happen. Regardless, while I was away yesterday, a new technology dropped. And it’s changing everything.

I’m not even trying to be dramatic here. I’m actually emphasizing the importance of this new development to its fullest capacity. The tech company NVIDIA has launched Artificial Intelligence (AI) software that takes simple abstract scribbles and creates realistic images from them. Now, obviously the results are informed by something – the technology references huge internet image collections – but the craziest part is that the actual creation of the imagery is done through its own pixel distribution. AI is now creating the way an artist would.
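I don’t know NVIDIA’s internals (the research area is called semantic image synthesis), but the shape of the idea is easy to sketch: feed a generator a label map of your scribble, where each pixel says “sky,” “water,” or “mountain,” and let it paint pixels conditioned on those labels. Here’s a toy, untrained PyTorch version; the real model is trained adversarially on huge photo datasets and is vastly more sophisticated:

```python
# Toy sketch of label-map-to-image synthesis: the general idea, NOT
# NVIDIA's actual model. Class names and sizes are illustrative.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g. sky, water, mountain, grass, rock

class ToyGenerator(nn.Module):
    """Maps a one-hot segmentation map to an RGB image."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, seg_map: torch.Tensor) -> torch.Tensor:
        return self.net(seg_map)

# The "scribble": a 64x64 grid where each pixel holds a class label.
labels = torch.randint(0, NUM_CLASSES, (1, 64, 64))
one_hot = nn.functional.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()

fake_image = ToyGenerator()(one_hot)  # shape (1, 3, 64, 64); noise until trained
print(fake_image.shape)
```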

Everyone has been concerned about robots and AI taking over the mundane jobs of the economy (truck driving, food prep, delivery, healthcare, etc.), but what we all thought was untouchable were the creative fields. Art and design have always been held at a prestigious level throughout history. Not everyone can be creative, so that innate talent is hard to come by. We’ve seen an increase in the number of jobs being created in the art world, and in the stress being placed on the creator’s importance in all aspects of business.

So, this new technology seems to be upsetting all of our preconceived notions regarding AI’s lack of sensitivity to art. Here’s a short video that briefly describes and demonstrates its abilities:

As a designer, this is unbelievably fascinating. I don’t know anything about coding, but I can only imagine the amount of time and craft that has been put into this programming. However, when the VP, Bryan Catanzaro, speaks about “everyone becoming an artist,” I start to become wary. Right after that, he states that he hopes this technology will actually aid artists and designers in their new creations. I don’t know how I feel about this just yet. Sure, it’s a program created by artists, for artists. But where does this technology stop? Will AI eventually make the artist obsolete? I don’t think so; but it’s difficult to know just how soon this program could eat into the work of photographers, digital painters, etc.

I was naive to think the creative fields would remain untouched by such a technology. The future is upon us, and it’s becoming even more difficult to predict.

Delivery Robots Could Be the Cutest New Tech Idea of 2019

@amazon

I love Amazon. I’ve used Prime for almost 6 years now, and even after my half-price student discount ends in the near future, I will continue to use it. A teacher of mine who is an avid bookworm (he owns thousands of books, and I mean thousands) told our class he saved almost $5,000 in his first year with a Prime account.

The mailing system in our country (and the world for that matter) still amazes me to this day. Yes, certain packages can take weeks to be delivered, and we’ve all had something disappear in the mail at one point or another. But 9 times out of 10, it’s within a few days (in the case of Prime, only 2!) that something – even from across the country – gets to your door.

When you think about how the first few years of the postal service ran on horse and buggy, it’s mind-boggling to consider the millions of letters, parcels, packages, and more that are delivered each day. Not to mention the data the average consumer can get their hands on nowadays. With the Delta app, I get a notification when my suitcase is loaded onto the plane. I know that’s a little different than your average delivery, but with a simple tracking number, I can see that a package of mine being delivered tomorrow has only 12 stops left before it reaches my apartment building. How is that not crazy incredible?!

I hear people complain about a shipping confirmation telling them it’ll take 3 days for their order to get to them. 3 DAYS?? C’mon, that’s nothing. I understand there are things you can’t wait for, or maybe even need ASAP, but 3 days? The amount of infrastructure and organization of people and places it takes to get you that fidget spinner you have. to. have. right. this. minute. Give me a break.

So, after this brief gush about one of the modern marvels of the world, let’s insert the latest and greatest technology (that’s not the saying, but oh well). Amazon is now testing delivery bots. Now, they are completely adorable. And quite honestly, I’d love to get a delivery from one of these lil guys; but they do look pretty…dopey. Here’s a short video:

Amazon just started testing these buggers about a month ago. But Door Dash – a food delivery service from restaurant to home – has been doing research and development on this since early 2017. I honestly hadn’t heard much about it, even though this Buzzfeed video had more than 8.7 million views. Guess I’m out of the loop. The short video actually shows the articulation of the three wheels over curbs (it’s pretty wild!), and the camera and sensor systems in place.

Now, my main critique of such a service was the possibility of theft. Not necessarily of the food (although that would be terribly unfortunate for any hungry person waiting anxiously), but rather of the bot itself. One of the Door Dash techs said they are coming up with ways to deter theft, however, so the video above kind of debunks my short-lived theory.

I will say, stealing and damaging property were some of the things I thought would ultimately destroy the city scooter phenomenon (back in Portland last year, and now Detroit). But all of the scooter companies seem to be doing great, with most, if not all, of their assets still up and running to this day.

I have to say, I’m kind of excited to see what happens with this. The next revolution of delivery is heating up, people! If only Paul Revere could see us now.

The Uncanny Valley

Think of a robot. Now describe it.

What is it wearing, if anything?

Is it all metal? Does it have a human, animal, or extraterrestrial-like face?

Does it walk on all fours? Or stand up like us?

Is it smiling? Or does it seem angry?

There aren’t “right” or “wrong” answers here, and not everyone will answer the same way, because not all robots look alike. Wow, thanks Sydney, for stating the obvious over here. But…have you ever thought about why some robots are cool while others seem way too creepy? Welcome to the Uncanny Valley.

Scientists have (somewhat) figured out why the phenomenon of almost-too-realistic-human-like robots makes our skin crawl.

“I think the key is that when you make appearances humanlike, you raise expectations for the brain. When those expectations are not met, then you have the problem in the brain.”

Ayse Saygin

The Uncanny Valley is reached when a robot (almost identically human in its characteristics) tries, but fails, to mimic a real human. Saygin says things like shoddy eye contact or jerky movements are usually dead giveaways. Our brains instinctively and unconsciously pick up on these unnatural movements, sometimes before we even consciously realize it. Even when we know something is off, it’s hard to tell exactly what is making us so uneasy.

Some people in the field say it’s a good thing we have this ability to sense when something is non-human: a kind of self-preservation intuition. As robots become more commonplace, this Uncanny Valley will “prove itself crucial as humanlike robots or virtual companions enter homes and businesses in coming years.”

A friend of mine is doing a project surrounding robots and mobility products in children’s hospitals. I suggested making facial charts of what we find charming, tolerable, and downright chilling. I’m placing a few photos below; see if you can spot the differences between them. The future is quickly approaching. How will designers be able to fully integrate AI and robotics with us seamlessly? And do we really need, or even want, that?