NOTE: This post is part of my Machine Learning Series where I discuss how AI/ML works and how it has evolved over the last few decades.
Autoencoders are a type of neural network architecture used for tasks such as dimensionality reduction, feature extraction, and data denoising. With their ability to learn efficient representations of data, autoencoders have found applications in various fields, from image processing to anomaly detection. In this post, we'll explore the structure and functionality of autoencoders and delve into their use cases.
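To make the idea concrete, here is a minimal sketch of the encode/compress/decode cycle. The weights are hand-picked for illustration (a real autoencoder learns them by minimizing reconstruction error): a linear encoder projects 2-D points onto a 1-D bottleneck, and the decoder maps the code back to 2-D.

```python
import math

# Minimal sketch (illustrative weights, not trained): a linear autoencoder
# that compresses 2-D points into a 1-D bottleneck and reconstructs them.
# Encoder and decoder share the unit direction w; points lying along w are
# reconstructed exactly, everything else is projected onto that line.
w = (0.6, 0.8)  # unit vector: 0.6**2 + 0.8**2 == 1

def encode(x):
    # 2-D input -> 1-D code (dot product with w)
    return w[0] * x[0] + w[1] * x[1]

def decode(z):
    # 1-D code -> 2-D reconstruction
    return (w[0] * z, w[1] * z)

x = (1.2, 1.6)            # lies along w, so reconstruction is exact
x_hat = decode(encode(x))
error = math.dist(x, x_hat)
```

The reconstruction error is the quantity training actually minimizes; data that doesn't fit the learned low-dimensional structure comes back distorted, which is exactly why autoencoders work for anomaly detection.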
Recurrent Neural Networks (RNNs) are a class of neural networks designed to handle sequential data. Whether it's analyzing time series, understanding natural language, or predicting stock prices, RNNs are powerful tools for capturing temporal dependencies in data. In this post, we'll delve into the structure of RNNs, how they process sequences, and their practical applications.
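The defining trick of an RNN is a hidden state that is fed back into the network at every step. Here is a minimal single-unit sketch with hypothetical (untrained) weights, just to show how information from earlier inputs persists through the sequence:

```python
import math

# Minimal sketch of a single-unit recurrent cell (hypothetical weights,
# not trained): h_t = tanh(w_x * x_t + w_h * h_prev + b).
# The hidden state h carries information forward through the sequence.
def rnn_forward(xs, w_x=0.5, w_h=0.8, b=0.0):
    h = 0.0          # initial hidden state
    history = []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)
        history.append(h)
    return history

# A single spike at t=0; the zero inputs afterward show the state decaying
states = rnn_forward([1.0, 0.0, 0.0])
```

Notice that the later states are nonzero even though their inputs are zero: the network "remembers" the spike through the recurrent connection. That fading memory is also why plain RNNs struggle with long-range dependencies, which motivated gated variants like LSTMs.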
Convolutional Neural Networks (CNNs) have become the go-to architecture for image recognition and computer vision tasks. CNNs excel at identifying patterns in images, such as edges, textures, and shapes, making them a key player in applications like image classification, object detection, and facial recognition. In this post, we'll explore the key components of CNNs, how they operate on images, and their use cases.
Feedforward Neural Networks (FNNs), also known as Multi-Layer Perceptrons (MLPs), are one of the most fundamental and widely-used neural network architectures in machine learning. FNNs have been employed for a variety of tasks, including classification, regression, and feature extraction. In this post, we'll explore the architecture, training process, and applications of FNNs.
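A classic illustration of why the "multi-layer" part matters is XOR, which no single-layer perceptron can compute. The sketch below uses hand-picked weights (a real FNN learns them via backpropagation) and a step activation to keep the arithmetic exact:

```python
# Minimal sketch of a two-layer feedforward network (hand-picked weights,
# not trained) computing XOR -- the classic function a single-layer
# perceptron cannot represent.
def step(v):
    return 1 if v > 0 else 0

def mlp_xor(x1, x2):
    # Hidden layer: h1 fires for OR, h2 fires for AND
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output layer: OR-but-not-AND, i.e. XOR
    return step(h1 - h2 - 0.5)

truth_table = {(a, b): mlp_xor(a, b) for a in (0, 1) for b in (0, 1)}
```

The hidden layer re-represents the inputs so that a linearly inseparable problem becomes separable, which is the core intuition behind every deeper architecture in this series.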
Neural networks are the foundation of many artificial intelligence and machine learning applications. There are several types of neural networks, each designed to address specific types of problems. In this post, we'll explore the most common types of neural networks and their applications.
One of the most transformative developments in the field of artificial intelligence and machine learning was the advent of neural networks. These computational models are designed to mimic the way the human brain processes information and are capable of performing complex tasks such as image recognition, natural language processing, and more. In this blog post, we'll explore what neural networks are, their components, and why specialized hardware like GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) are highly effective for training and deploying neural networks.
Computer vision, the field of AI that enables computers to interpret and understand visual information from the world, has undergone significant advancements over the past decade. The ability to analyze images and videos, recognize objects, and understand visual scenes has opened up a multitude of applications in fields such as healthcare, autonomous vehicles, and security. In this blog post, we will explore the key milestones and breakthroughs that have shaped the evolution of computer vision over the last ten years.
Machine learning has become an integral part of our lives, powering applications from voice assistants to self-driving cars. However, the field has a rich history that spans over five decades, with foundational ideas that date back even further. In this blog post, we'll explore the key milestones and breakthroughs in the history of machine learning over the last 50 years and how they've shaped the field as we know it today.
Machine learning is an exciting and rapidly evolving field that has the potential to transform virtually every industry. From natural language processing to computer vision, machine learning models are becoming an integral part of our daily lives, enabling new levels of automation and understanding. To explore the fascinating world of machine learning and share insights with a broader audience, I am launching a blog series on AI/ML.
In this post, I will discuss the topics I will be covering and what you can expect from the upcoming blog series.
Art critics have been present long before the birth of photography and have accompanied photographers through the journey from analog to digital. Now, with the proliferation of machine learning and the integration of on-device ML chips, such as Apple's Neural Engine chip, your smartphone has evolved into a discerning critic of your photographic creations.
As a family, we love to explore new places, and a month ago we took a trip to Disneyland for my birthday. It was my first time visiting the happiest place on earth with my 8-month-old son, Wesley. Needless to say, it was an experience I will never forget.
As we navigate the digital world, we often come across articles we don't have time to read but still want to save for later. One way to accomplish this is by using the Read Later feature in Apple News. But what if you want to access those articles outside the Apple News app, such as on a different device or with someone who doesn't use Apple News? Or what if you want to automatically post links to those articles on your blog? That's where the nerd powers come in.
In 2008, I had the opportunity to tour SPAWAR, the Space and Naval Warfare Systems Command, now known as NAVWAR. SPAWAR/NAVWAR is a research and development laboratory for the U.S. Navy. During my visit, I was fascinated by the various autonomous military robots being developed and tested there. I photographed the tour and wrote about it for WIRED News.
Fast-forward to 2023, and with the emergence of large language models like ChatGPT and Bing AI, it's possible to imagine how these robots could be controlled using AI in ways that are frankly somewhat terrifying. With great power comes great responsibility, and we must consider the potential risks of relying on AI-powered machines in warfare.
On the night of July 4th, 2018, I was in my apartment at The Vermont in Koreatown when illegal firework mortars started exploding right outside my window on the 20th floor. The massive amount of explosive materials being set off on the street below was almost too much for my apartment to handle. My cats were less than impressed with the spectacle, and I can't blame them. Despite the danger, I couldn't resist capturing the colorful chaos with my camera. And let's just say, it was an explosive experience!
The Deep Space Network (DSN) is one of the most critical components of space exploration and communication. The network comprises a series of antennas used to communicate with interplanetary spacecraft, such as the Mars rovers and the Voyager spacecraft, as they travel through the solar system.
Have you ever stumbled upon a photo and found yourself wondering who that celebrity is, or what kind of object is in the background? Enter Amazon Rekognition, a powerful deep learning-based image and video analysis service that can help answer those questions.
Have you ever wanted to caption your photos automatically? With the GPT-3 Davinci model from OpenAI, you can do just that! By using image keywords, people, locations, and the album name, you can use AI/ML to generate captions that are not only descriptive, but also entertaining (and frequently hilariously wrong).
In this post, I’ll explore the capabilities of GPT-3 for writing captions based on image data, and how it can add a new dimension to your photos.
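The first step is simply assembling the image metadata into a prompt. Here is a sketch of that assembly step (the field names and wording are my own, hypothetical choices, not the exact prompt I used); the resulting string is what you would send to a text-completion model such as GPT-3 Davinci:

```python
# Sketch of the prompt-assembly step (field names and template are
# hypothetical): gather an image's metadata into a single prompt suitable
# for a text-completion model.
def build_caption_prompt(keywords, people, location, album):
    parts = [
        f"Album: {album}",
        f"Location: {location}",
        f"People: {', '.join(people) if people else 'none'}",
        f"Keywords: {', '.join(keywords)}",
        "Write a short, entertaining caption for this photo:",
    ]
    return "\n".join(parts)

prompt = build_caption_prompt(
    keywords=["sunset", "skyline"],
    people=["Penelope"],
    location="Downtown Los Angeles",
    album="LA Walkabouts",
)
```

Because the model only ever sees this metadata and never the pixels, the captions can be confidently, hilariously wrong, which is half the fun.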
I have a massive archive of photos from the last 20+ years of shooting. I have hundreds of shoots and hundreds of thousands of images that I have never published because it’s so time-consuming to sort through them, choose the best shots, caption, and tag them.
It's been over a decade since I last wrote a blog post, but I'm excited to be back. I started blogging in 1998, and it's been a wild journey ever since. So much has changed in my life, and I'm eager to share it all with you.
I shot another time lapse yesterday; this one ran for nearly a full 24-hour period. I'm starting to get the hang of this whole time lapse thing. This time I shot 3 bracketed frames every 60 seconds during the day, then switched to single exposures when the sun set. My favorite part is when the sun sets and reflects off a glass building. Make sure you view this in HD, preferably 1080p full screen:
Last weekend, Penelope and I took our second Secret Stairs hike together. This time we hiked through the hills of Silver Lake. I fell in love with the neighborhood; it was so beautiful. If we don't end up buying a house in the Pasadena area, we will likely buy in Silver Lake. Here are a few photos from the hike:
This weekend I marveled at the beautiful clouds above Downtown Los Angeles. I decided to shoot an HDR time lapse using my 5D Mark II. Here are the results; make sure you select at least 720p, or 1080p for full-screen HD goodness:
Last weekend Penelope and I went on an interesting urban hike through a series of hidden staircases in Montecito Heights. We found the hike in an awesome book gifted to us by Pen's mom called Secret Stairs: A Walking Guide to the Historic Staircases of Los Angeles by Charles Fleming. The book is packed with hikes in different LA neighborhoods that take the reader up and down a series of staircases hidden away in plain sight.
We really enjoyed our first hike through Montecito Heights. I dragged along my camera and tripod and made some HDR photographs:
Last month Penelope and I took another excellent trip with the Desert Explorers. During the adventure we explored various parts of Anza Borrego and saw an ample assortment of wildflowers. I've never captured so many different species in one outing. I was very impressed with the range of flora throughout the park. Amazing:
A few months ago, Penelope and I took a trip to San Francisco. At one point we went up to the top of Coit Tower, something I hadn't done since I was a child. I wanted to shoot some HDR photos from up there, but they don't allow tripods. Luckily I was able to brace the camera against the glass and got some steady shots:
For our semi-annual vacation this year, Penelope and I went to Death Valley, Tahoe and San Francisco. I posted the Death Valley wildflower photos a few weeks ago. Here are some of the landscape shots I captured in Death Valley.
I went out on my balcony to check out the sunset and noticed a couple of people with a large-format camera in the parking lot. Turns out one of them was a photography student. I thought it would make for an interesting metaphoto:
Last year my photos ended up on Wired, Boingboing and URB. This year I'll be shooting Coachella again, look out for my photos on here, Wired and possibly a few other places around the web. More photos from my coverage last year below and after the jump:
Over my vacation a few weeks ago I spent a few hours photographing wildflowers in Death Valley. I found 21 different species in bloom in 3 separate areas of the park. The first area, near Ashford Mill, had mostly Desert Gold and Sand Verbenas. The second area, off Green Valley Road, where we camped and which received a good amount of rain (on February 25th-26th), had Cryptantha, Woolystars and Fiddlenecks. Finally, the big score came on my birthday, the 27th, just below Jubilee Pass on the way to Shoshone.
The ground by the road was covered with over a dozen different species of wildflowers. I photographed many flowers I had never seen before. It was really quite beautiful. Hopefully the rain will keep up and more flowers will pop up throughout the desert. There is something really magical about finding wildflowers carpeting the ground in one of the harshest desert environments on the planet.
Below are some of my favorites from the trip. Read on to see each of the 21 different Death Valley wildflowers I photographed.
This weekend Penelope and I trained with our Search and Rescue team out in Anza Borrego. We competed in an orienteering race and did an ELT search. I came in 4th in my division (Orange) out of 10 people. During a slow point in the mission I shot some photos of the amazing views at Buttes Pass in Anza Borrego:
This weekend my lovely wife Penelope and I hiked up to Mt. Lee in Griffith Park. Mt. Lee is also the peak that the Hollywood Sign sits on. You can't quite get to the sign as it's fenced off and marked "No Trespassing", but you can get above it.
The route we took started at the Camp Hollywoodland parking lot and was about 6.5 miles round trip. The hike took us about two and a half hours with many stops for photos.
The company I work for, Cartifact, has an interactive map of Griffith Park you can check out. The map is quite detailed. We're working on an updated version along with some cool new ways to view it.
The LA sky was amazingly clear and smog-free that day. In the photos below you can clearly see Catalina in the distance:
Recently I joined a hackerspace in Downtown LA called Null Space Labs. What is a hackerspace, you ask? A hackerspace is a communal workshop where folks can work on electronics, programming and basically whatever tech stuff they're interested in. NSL was started earlier this year by a group of people from the local computer security (hacking) scene.
Here is the description from the website:
Null Space Labs is a hackerspace in downtown Los Angeles, a place for people who do interesting things with tech.
We offer wifi, coworking space, an electronics and hardware lab with soldering stations and rework equipment, a small wet lab, simple wood and metal working tools, public computers, and most of all a creative environment that's open to visitors.
Fields of interest of people you might find at the lab include DIY electronics, hardware hacking, lock picking, game development, entrepreneurship, security, graphics programming, AI, photography, privacy and civil rights, and more.
The group that operates Null Space Labs sees itself solely as an infrastructure provider and exerts little influence over projects and events carried out at the lab. We are trying to be financially independent, and finance our operations through membership fees. The space was opened in May 2010.
I joined NSL a few months ago, and this month I took the plunge and became a keyholder, granting me access whenever I feel like working on my projects. The space is great; there are tons of really knowledgeable people who are always more than willing to help with pretty much anything related to electronics, microcontrollers, hardware hacking, network security, and more.
The members of NSL are working on a plethora of interesting projects. You can read all about them on the wiki, but here is a selection of some that are particularly interesting:
We have a ton of great equipment for use by members and non-members alike, including over a dozen Metcal soldering stations, hot-air and hot-plate rework equipment, oscilloscopes, function generators, a PCB CNC machine, stereo microscopes and much more. We frequently do group buys on parts and PCBs. We also have a large collection of parts in house, available for use in your projects (donations appreciated).
If you're in the neighborhood, come by and check out our space. If you want to learn about electronics and soldering, we have a fun board you can put together in an hour or two if you're new to SMD soldering. You can tell if we're in by looking at this wiki page or by following the NSL Status Twitter stream. Here is our address:
Texas Instruments recently came out with a fun and powerful development robot based on the Stellaris LM3S9B92 microcontroller. The robot, known as the Stellaris Evalbot, is packed with tons of functionality that leverages the LM3S9B92's robust feature set. The Evalbot comes pre-assembled, with the exception of the wheels and bump arms, which take just a few minutes to put together.
First, let's talk about the feature-rich microcontroller at the heart of the Evalbot: the Stellaris LM3S9B92. The Stellaris, created by Luminary Micro (acquired by Texas Instruments in 2009), is a 32-bit ARM Cortex-M3 MCU that runs at speeds up to 80 MHz. It sports a wide array of features including:
256 kB flash and 96 kB SRAM
32-channel DMA
32-bit external peripheral interface
ROM preloaded with a boot loader, AES and CRC functionality
10/100 Ethernet MAC/PHY
2 CAN controllers
USB 2.0 Full Speed OTG/Host/Device
2 SSI / SPI controllers
2 I2C interfaces
I2S interface
3 UARTs
8 motion-control PWM outputs with dead-band
2 quadrature encoder inputs
4 fault protection inputs
3 analog comparators
16-channel 10-bit ADC
16 digital comparators
24-bit SysTick timer
4 32-bit or 8 16-bit timers
2 watchdog timers
Low drop-out voltage regulator
Up to 65 GPIOs
The Evalbot is the perfect platform for learning about and developing for the LM3S9B92. It takes advantage of nearly every feature included in the Stellaris MCU. The Evalbot is both battery and USB powered, and automatically switches when plugged into a computer. It features a collection of analog and digital peripherals along with a large number of breakout pads and headers for I/O expansion. The Evalbot includes:
MicroSD card connector
USB Host and Device connectors
I2S audio codec and speaker
RJ45 Ethernet connector
Bright 96 x 16 blue OLED display
On-board In-Circuit Debug Interface (ICDI)
Wireless communication expansion port
Two DC gear-motors provide drive and steering
Opto-sensors detect wheel rotation with 45° resolution
Sensors for bump detection
The Evalbot comes preloaded with the μC/OS-III real-time kernel, and includes a time-limited version of IAR's IDE, which you'll need to get started programming the bot. Also included is the source code for the Evalbot and some handy in-circuit debugging tools. It's fairly easy to get set up, but the toolchain runs on Windows only. I was able to flash a modified version of the firmware after just a few minutes of tinkering. My only complaint is that the IDE is quite expensive to purchase once the trial period runs out.
The Evalbot retails for $149 for the robot by itself or $200 for the robot and a book about programming the μC/OS-III real-time kernel. If you're looking to learn more about real-time systems and play with a powerful microcontroller, I highly recommend the Evalbot. As I mentioned in the headline, I have five Evalbots to give away; click here for more info about the giveaway.
Giveaway Info:
Texas Instruments was generous enough to send me five Evalbots to give away. I will draw names from a hat on Black Friday, November 26th. To be entered in the drawing you must meet the following requirements:
Have a project idea for the Evalbot
Be a paying member of a hackerspace
Be willing to share photos and/or a brief writeup once you have completed your project
Be a US resident (I have to ship these on my own dime)
To be entered in the drawing, post a comment below describing your project idea. Don't forget to mention which hackerspace you belong to.
A few years ago I toured the U.S. Navy's Space and Naval Warfare Systems Center Robotics Lab in San Diego. I shot photos and wrote a piece for Wired about my experience there. What follows are some out-takes along with high-res versions of many of the shots in the piece. Autonomous military robots... what could go wrong?
SAN DIEGO -- The Navy's MDARS-E is an armed robot that can track anything that moves. Told that I was the target, the unmanned vehicle trained its guns on me and ordered, "Stay where you are," in an intimidating robot voice. And yes, it was frightening.
Perched atop a strip of cliffs lining a beautiful section of the Pacific Ocean, the Space and Naval Warfare Systems Command in San Diego develops semiautonomous armed robots for use in combat by the U.S. military. "We're not building Skynet," says Bart Everett, the technical director for robotics at SPAWAR. Though Everett assured me that the use of the robots' on-board weapons is under the strict control of their operators, the lab's bots can navigate and map complicated terrain, work cooperatively with soldiers, and identify and confront hostile targets. Sure, they're no Johnny Five, but robots with guns are both creepy and fascinating.
ROBART III is a prototype platform designed in-house at SPAWAR. If it weren't for the chain gun and missiles, he would be pretty cute. Once he's ready for battle he'll almost certainly don an evil-looking suit of armor. ROBART's sensor array consists of a multitude of cameras, SICK LIDAR (like radar, but with lasers), ultrasonic transducers (gold spots), passive IR (infrared radiation) detectors and more. The weapons are planned to work in unison with a special rifle that would automatically target where a soldier points his weapon.
One of ROBART III's weapon systems is this nonlethal pneumatic chain gun. It uses a combination of laser sighting and machine vision to lock in on its target and barrages it with a torrent of 3/16-inch-diameter projectiles. In tests, plastic pellets (like air-soft munitions) and steel darts were used.
This prototype robotic weapon platform is designed to be buried underground for camouflaged deployment. When called to action, the robotic gun pops up and starts shooting. If you're the unlucky soul on the business end of this gun, it's likely curtains for you -- this robot is an extremely accurate shooter. A high-tech night-vision scope permits dead-on targeting even during moonless nights.
This Saturday marks the opening of my third solo show. The show is at Indigo Gallery in Pomona. The work consists of an eclectic assortment of photos from my many Downtown LA Walkabouts. There are many other galleries open during the Pomona Art Walk, which brings a large group of people to downtown Pomona. The show runs until December 25th, so you have plenty of time to check it out. This Saturday is the opening reception. Here are the details:
Dave Bullock L.A. Walkabouts
Indigo Gallery
558 W. Second Street
Pomona, CA 91766
Opening Reception - Saturday, Nov. 13th 6-10pm
Last Saturday Reception - Saturday, Nov. 27th 6-9pm
Roughly one year ago a huge fire tore through Southern California burning over 160,000 acres of forestland. When it first started, I took photos of the Station Fire blazing through our local mountains.
A few weeks ago, and almost exactly a year after the Station Fire was extinguished, Penelope and I went for a drive up Angeles Crest Highway. The route we usually take, through Pasadena/La Cañada, is still closed, but we were able to access the forest by going in the back way.
The bad news is, much of the forest has been burnt to a crisp. The good news is that there is life everywhere and the forest will return eventually. Here are a few photos I shot of the forest coming back to life:
A few years back I toured NASA's Goldstone facility for Wired. Goldstone is a node on the Deep Space Network. Basically, it's a collection of gargantuan antennas in the Mojave Desert. NASA uses the antennas to talk to various satellites, rovers, probes and other space-based devices it rockets out of our atmosphere. I just posted a photo gallery of high-res images, many of them never published before. Check out my photos of Goldstone, NASA's Deep Space Network.
This weekend I quietly rolled out the latest version of this website. I completely redesigned and reprogrammed it and along the way I added a ton of new features.
Over the years I have released several iterations of eecue.com with various feature enhancements along the way. This release is my most ambitious yet. Some of the highlights include much bigger images, print ordering with greatly reduced prices, enhanced navigation and tagging and much more.
Gesture and keyboard navigation. When you're viewing images or blog posts, try using your arrow keys to navigate through the images. Fun stuff. On iPads, iPhones and other iOS-based devices, swiping left and right accomplishes the same function.
Replaced missing Flickr image embeds in my blog with local versions of said images.
1024px wide interface for iPad glory.
Removed all advertising from entire site. Please buy prints to help offset my hosting cost!
There is more to come, I still have a bunch of features I will be adding in my free time over the next few weeks. In addition to those features I have tons of photos to post and dozens of blog posts that I've been meaning to write.
Let me know if you have any issues with the site in the comments below or by emailing me directly.