Getting Beyond Basic

(This article originally ran in February of this past year. Dan Blanchette is taking December off. New articles will appear in January 2019.)

Back in the day when computers moved beyond dot-matrix printing, “desktop publishing” came into being. Apple, with its LaserWriter printer, and Aldus, with its PageMaker page-layout software, set the stage for an entirely new way to print on a small-business scale. This was in the mid-to-late ’80s, and almost without notice, typesetting houses across the country were on the ropes.

Now, all of a sudden, we were the typesetters. Typesetting had its own set of rules: flush-left, flush-right and justified—of course. But things like hanging punctuation and drop caps? How do you set those? We had to learn. But step one…

PageMaker, and applications such as Microsoft Word, came equipped with a standard set of typefaces (known in computerese as “fonts”) to give documents a professional look. The limited set usually numbered a dozen or fifteen fonts and included sans-serif faces like Helvetica and Geneva, maybe one italic, a monospaced style like Monaco, maybe a calligraphic face such as Apple Chancery, and a few serif faces like Palatino or Times. Designers already knew things like kerning and proper letterspacing, but non-designers stepping into this new realm took things as they came out of the box.

And one of the obvious things everyone had to deal with was the limitations imposed by those packaged fonts because initially they were the only ones available. Soon fonts of all styles and weights became available, at a price.

Adobe came out with scalable PostScript fonts, which were either built into a printer’s ROM (read-only memory) or downloaded into the printer’s memory before printing. That made everything work, and we had the luxury of things like WYSIWYG (what you see is what you get) font menus. All this is now ancient history: the way computers “talk” to printers to print exactly what you see on the monitor’s screen. But back then, it was a revelation.

The reason I bring all this up today is that there are still a ton of those desktop publishers out there who haven’t moved beyond the basic set of fonts that came with their computer’s software. Of course, designers know better. But this column is for everyone who wants to enrich their knowledge of design and of how things work graphically. One of the tools of design is typography. And knowing when to use a specific font for a specific design feel is the beginning of good graphic design thinking.

I came across one such circumstance just a few weeks ago: I agreed to help out a relative with a choral publication, which had been done for years by a non-designer who no longer wanted to tackle it. Last year’s booklet was done entirely with that basic set of fonts from around 1994.

I once worked for a company in 2005 whose entire type collection amounted to maybe thirty fonts. It was all I could do to try to expand that.

Type as a design tool lets the messages in copy become expressive through the good and varied choices we have in fonts. We have all kinds of fonts—thousands of them—to choose from in making those messages come to life on the printed page and on websites. Just go on the web and search for them. Many are free for the download.

If you do, be careful: some of those free fonts are not complete, meaning they do not contain all of the symbols and characters with diacritical marks that a standard font should have. But you may find, for your personal use, a few really good fonts that can make your documents come alive, be it announcements for friends and family, anniversaries, weddings and births, Christmas cards, what have you.

The thing I’m trying to say, non-designers, is take the extra step and become aware of what’s out there in fonts.


Contemporary Design Landscape

(This article originally ran in March of this past year. Dan Blanchette is taking December off. New articles will appear in January 2019.)

I’d been applying for freelance work recently, and one of the posted listings referenced an application I hadn’t been familiar with: Sketch.

In my digital career, among the tools I’d become proficient with were Adobe Photoshop, InDesign, and Illustrator; Strata 3D; and a number of RAW photo processors, including Capture One. I’d also dabbled with a few photo editors and filters that add effects to bitmapped images. So when I saw Sketch listed in the posting as an attribute the agency desired, I was curious as to what it is.

In searching for it, I found it is a vector-based tool for designing interfaces and icons, and just another entry in the current landscape of communicating with other places and “teams” when collaborating on given projects.

There are other well-known applications for that kind of communicating. Slack is one. Sketch combines collaboration with added things like digital asset management, interface development, website building, and icon tools. But it’s anything other than what its name implies: sketching.

What about creating the art in the first place? It’s fine to come up with all this digital asset management and sharing across teams. Using all that stock imagery. What about the actual artwork creators? Where are those artists these days?

A close friend of mine was recently messaging me on Facebook about where we stand—as artists, designers, and educators who come from a generation before digital was even thought of as the way to do artwork—in today’s realm of art and design. She was taken aback to find that art and design students currently do not know how to draw, and are not required to learn.

And she’s right. One of the classes we attended as formative students in the discipline was anatomy. It was necessary to know anatomy for proficiency in figure drawing. And although that talent wasn’t required to succeed at the college, the ability to draw—to sketch—was.

She mentioned that her son, who works at a firm employing several designers, is one of a bare handful there who can actually draw, something still considered an asset at that place. But it’s largely true that most art schools these days do not teach students to actually draw. And I find that unbelievable.

It’s like the way grade schools no longer teach handwriting. Cursive hasn’t been taught in elementary education for years. Those kids do not know how to write their own signatures.

Are we totally that different from baby-boomers to millennials? Apparently. We can easily see the way small children have learned how to manipulate gaming devices and smart phones. It’s part of their early learning now. And that kind of instant interactivity has become the norm.

It does not matter to them what they are missing in the process of getting from point A (or zero) to point B (winning the game); or the process of getting from point C (having a blank canvas) to point D (having a piece of art). They never learned the value of actually making the art, seeing the picture developing from their own hands.

Years ago, when I was an illustrator, I was visiting a photographer friend of mine and admiring his work. After hearing that I was interested in developing a skill for it, he looked at me and said, “I don’t know why you as an illustrator find this so fascinating. I admire your ability, because you make something from nothing.”

That insight stayed with me for a long time. It made me value the talent I had more.

Maybe drawing and sketching is not valued any longer. I certainly have not seen it used in any form in the last twenty years on the job, in the last four positions I had in the design industry.

I remember learning the digital way back in the early 90s, learning how to “draw” in Adobe Illustrator. Even then I felt the name of that application was a misnomer.

To this day, I feel more akin to Leonardo da Vinci than I do to any digital artist. I still draw and sketch my ideas on paper. I will visit this subject again.


The New Illustration

(This article originally ran in May of this past year. Dan Blanchette is taking December off. New articles will appear in January 2019.)

I subscribe to The Atlantic, one of the oldest publications in the history of this country. It has thought-provoking articles written by really good journalists. And it has what might be labelled fair art accompanying those articles.

Other publications have good artwork as well, like The New York Times Magazine.

Tim Tomkinson created the image on the left for The Atlantic. It’s a more traditional style of illustration, requiring some actual draftsmanship. The artwork on the right, created by Ryan Snook for The New York Times Magazine, has a much different style.

What’s the difference? And why are they so different? And how do they affect the viewer?

Sure, Tomkinson’s piece accompanies an article about an actual person, Abigail Allwood, a scientist with NASA’s Jet Propulsion Laboratory, while Snook’s accompanies an article called “Crying at Movies”. But the art director at The Atlantic must’ve felt strongly about using an illustrator whose style was toward realism, whereas the person calling the shots at The New York Times Magazine probably said something like “anything goes”.

Weeks ago, I wrote about the decline of teaching actual drawing and illustration in art schools, which, when you think about it, doesn’t seem to make a lot of sense. I mean, things like anatomy and perspective were taught alongside figure drawing when I was in art school. Those things weren’t absolutely necessary for painting disciplines, but they were for commercial illustration.

So I’m open to discussion about why drawing is no longer considered a necessary attribute when it comes to creating quality commercial illustration, though I have my own theory as to why that is.

You see it all the time these days, the newer styles: much more like expressionism than realism. Expressionism plays to emotional reaction. As history will tell us, expressionism in painting came about after the impressionist period in the last portion of the nineteenth century. Impressionists taught the world (or those who visited art galleries and went to art openings) a new way of seeing. And that way of seeing was with your inner eye—meaning your brain—and not so much with your logical, or outer, eye.

Expressionistic art was also done in a time of upheaval in the world: the breakdown of the gilded age of kings and queens, the revolutions in Europe, the world wars. If you’re at all a student of art history, you know of art imitating life. Broad brush strokes (often with a lot of contrast in color), faces with garish angularity, and almost primitive proportions were characteristic of the form.

Snook’s illustration is very cartoony. But you don’t have to look far to see work that is not quite so funny in depicting emotion, work emoting much more tension—even anger.

My theory of why this is all prominent now in publicized artwork is that we live in a very changing world. A global economy (with several nations having proprietary resources), tensions around the world (knowing that now many nations have nuclear capability), strong climate changes, immediate news on TV and the Internet. Twitter and Facebook promote reactive activity. Maybe I’m wrong. But something has spurred things along to where commercial illustration is now, to where it reflects all that noise.

There are other possible factors: younger generations have different ideas of seeing the world in art; and for everyone, computer apps and plug-ins can easily take a photo and transform it into an illustration or even a painting, adding textures and warping the perspective. Why would you need to actually draw it first? Is that why we no longer need to teach it?

Because when you think about it, how would you teach a student to think in expressionistic terms? Maybe to them, realism is just too superficial.


Designs by One Person

I’ve made no bones about my issues with “teams” doing designs. I don’t like them, they stifle the creative process, and they’re a huge waste of time.

I’ve worked in places that subscribe totally to this team effort, sometimes using up to five different designers to submit ideas on creative (and then combine parts from each submission), or they’ll submit all to a “committee” (of non-designers!) to decide what will fly to the marketing department.

The creative process starts with one idea, culled from design cues in nature and environmental surroundings, and then polished to a finish according to one’s years of visual experience. The best designs are memorable this way: they are unique in that they are always one person’s vision, one’s take on what should be. It’s really that simple.

When Cecil Beaton was called upon to do designs for the fashions in My Fair Lady, his reputation had preceded him. Sir Cecil started out as a photographer in the 1920s and eventually gained respect for his fashion photographs and was hired by (British) Vanity Fair and Vogue while also doing portraits of celebrities in Hollywood. After World War II, he became a Broadway stage and set designer and started doing costume designs and lighting designs. Lerner and Loewe hired him to do the costumes for My Fair Lady in 1956. This success earned him the design spots for Gigi in 1958, and then for the movie version of My Fair Lady in 1964. His iconic Ascot outfit worn by Audrey Hepburn is unquestionably the most famous in all of movie history, and won Beaton the Academy Award for Best Costume Design.

Raymond Loewy was born in Paris and was a World War I veteran, attaining the rank of captain in the French army. After emigrating to New York in 1919, he found work designing windows and store displays for Macy’s and Saks Fifth Avenue. In 1929, he got a commission to streamline the look of a duplicating machine for a now obscure company, but that led to other commissions for designs for Westinghouse and refrigerator designs for Sears. He designed locomotives for the Pennsylvania Railroad in the late 1930s and soon developed a working relationship with the Studebaker Corporation. He designed most of the Studebakers throughout the 1950s, and in 1961 was called upon to design a new car called the Avanti, which debuted in 1963. This car is still considered one of the finest designs in automotive history. Loewy was a renowned designer in many areas, including furniture.

If you’re ever in Bear Run, Pennsylvania near Pittsburgh, you certainly will have to take in Fallingwater, designed by the premier architect in all of America and all the world, Frank Lloyd Wright. I won’t give you a detailed history of Wright, other than to say he had a single credo that guided his every effort—designing structures that were to be in harmony with their environment. His work is very distinctive and encompasses everything from residential homes to museums and even hotels and college campuses. There isn’t nearly space enough in this blog to cover the breadth and scope of his wonderful work, but if you’ve ever been in one of his “spaces”, you’d certainly remember it. Fallingwater, a house built in 1935 over a waterfall, is his most famous design.

Each of these solitary individuals set a tone for design during his lifetime that was influential and classic. Each was a pioneer and a trend setter. And that is something you don’t often come across.

What I don’t understand is why companies choose to ignore the tenet that design is an individual effort. In the formative years at any art and design school, that principle is taught and is borne out in the wonderful sketches and showcases that display the best work at schools such as Art Center in Los Angeles and the Rhode Island School of Design.

But something gets lost in the commercial aftermath. Corporations are run by a CEO, yet the creative decisions are made by departments further down the chart, with the marketing department weighing in more than the rest. Unfortunately, it’s all about money now. Even an outside design agency’s creative gets kicked back several times by the client’s committees before going into final art, but not before it gets screened to smithereens by focus groups. The entire process is bigger, yet the results minuscule.

It’s amazing anything gets done, and when it does, it meets nobody’s satisfaction nearly enough. I say take a step back and look at why great designs are not produced these days. What would a Raymond Loewy package design look like, or a Frank Lloyd Wright pop-up book?

Style is Gone

I’m from an era well before the advent of the computer. When I entered the “field” (as we used to call it), my portfolio had everything in it, at least everything that I wanted to do going forward. But twenty-or-so years later, the art and design business had transformed itself.

I first came into the business as a designer/illustrator. One of my mentors, a guy by the name of Fred Coe, had told me early on that my portfolio had too much in it, or not enough, depending on what art directors wanted to see. Half of the bag was illustration and half was design work, of which he told me to choose only one to promote. “They won’t know where to put you,” he said.

At first, I was lucky. I’d found a place in Cincinnati that hired me to do both disciplines. But a few years later when I moved to a bigger market—Chicago—I found I had to specialize. My design half was ignored while my illustration half was drawing the attention, and so I became a full-fledged illustrator.

Doing that work came easy to me, my style being photo-realistic. And because Chicago was—and still is—a big market in the ad business, my future looked bright as long as advertising illustration was paying my way. This was in the 1970s and 1980s, and newspapers and magazines became the showcase for my work, as well as for that of many friends and acquaintances doing similar work. With ad agencies galore and several independent art “studios” as sources, a freelance illustrator could make a lot of money.

What made it fun and interesting for all concerned was that each of us—the illustrators—had a different working style. We drew the line work differently from each other, we applied the paint or watercolor (with a brush and/or airbrush) differently from each other. Overall, we thought the art process through in our own unique way, and that thought process was what made the end results appear so different. Our work was as independently unique as much as each individual appeared standing before you. That’s what made the work so personal.

Then “progress” came along in the late stages of that latter decade. Computers were making inroads into the business, and catalogs for stock illustration began to appear. It all seemed to happen within a few years: ad agencies were letting art directors go, illustrators weren’t getting assignments as before, and soon photographers were suffering the same plight. Type houses started disappearing. Stock photography was showing up. The business was changing, and changing rapidly. I bought up cameras, lights, and a bunch of stands and booms, and made my own backdrops. I needed to diversify, to reinvent myself.

With commercial illustration vanishing, I found getting back into design work difficult. My contacts knew me as an illustrator, not as one who could actually do design work. But that’s another story.

When Macs made the biggest splash with System 7.1 around 1994, I bought into it. I quickly learned Adobe Illustrator and Photoshop, QuarkXPress, and a few other applications to bring myself up to snuff. I found the medium a great tool for processing artwork. What I did not like, however—and still don’t—was creating artwork with it. It isn’t natural.

I find it confining. The art one can do on a computer screen looks and feels mechanical, like drawing with a compass and plastic drawing aids like Alvin templates, drafting tools from an earlier age. Painter, a Corel application, tries hard to make painting as natural and fluid as it can on the screen, but falls way short. The only real advantage one can say about creating illustrations on computer is the undo feature.

What the computer did was homogenize the entire advertising industry. Since individualized illustration and much of photography were sitting in virtual purgatory, the rest of the designed imagery was being done by young “graphic designers” (then a new term) who were schooled in a new mindset of using the computer not as a tool, but as a machine that everything was created with, like a food processor that already had all the ingredients and recipes in it. This new method of designing directly on the monitor, from scratch, was foreign to anyone like myself. These new graphic designers did not use pencil and paper—ever—to even visualize potential layouts (thumbnails) of magazine pages or of ads for products. The very thought process was short-circuited.

And why would these young souls bother to actually design something? They already had templates to follow for that. And worse, this homogenization was extending across the industry. Specializing in one area—like the rest of my breed—was now a bad thing. You were expected to become adept at everything, including website design and HTML. This automaton mentality feeds the “team” process that exists everywhere, and now all members of that are interchangeable—and replaceable.

The end result of all this is the total abandonment of style. And one elephantine reason to perpetuate this new process of non-thinking is speed. Do it quickly. Get it done now. If you take too long to actually design something new, you’re on the outs. They have templates for everything. And they have budgets for everything, too. I once lost a freelance assignment doing logos for a generic soft drink. They wanted twenty designs in two days. After three hours, the art director saw my process of initial designs on paper and handed the assignment to another, younger person, telling me I was too slow and that “there are templates you could’ve used.”

How is that different? Where is the style, the individuality? It’s gone. Everything is standardized. You take a photo of something (top left) and there are filters to change it into artwork (top right). With so many plugins and filters and morphs, a sameness in everything prevails. Even in the movies, the animated 3D cartoon you take your children (or grandchildren) to see looks the same as the next, because the software the studios use is the same.

I’ll stay with what I do as far as illustration goes. It’s watercolor. It’s natural. And it’s only me doing it.

What Are “Design Sensitivities”?

(This article originally ran in December of 2017. Dan Blanchette is taking the week off.)

I’ve written a few entries in this column with references to “design sensitivities”. What are they?

Design sensitivities are most often reflected in our personal choices. For example, in looking at the interior of a friend’s home, you can pick up their preferences in furniture, paint colors, patterns on accessories, and textures. Anything you see in that home is a preference. Anything you don’t see might be an example of one of that owner’s aversions.

Some people aren’t aware they have design sensitivities until they see someone else’s preferences. Everyone is different. They know they have likes and dislikes when it comes to shopping for themselves. But what they may not know is the cause of those preferences.

Almost all preferences are the result of associative experiences—especially those involving people you’ve known. If an acquaintance of yours, whom you dislike, wears shirts with wide horizontal stripes, that can work into your subconscious, and you may later find you have an aversion to that pattern in clothing. Likewise, if you yourself prefer to wear plaid shirts and you overhear a comment that plaid shirts make you look like a second-class person, the comment may very well affect your future purchases of plaid shirts.

It’s the same with colors, shapes, and textures. This can apply to a home’s decor, a car’s interior, a painting, or even a design on placemats. A color you see can recall an item from your past, or a shape can bring to mind something you saw years ago that might’ve looked wrong for any number of reasons.

The thing is, the longer we live and the more associative experiences we have, the more we develop our design sensitivities, our preferences. For a designer, one who puts designs together from scratch, those sensitivities come to the surface immediately.

Because all those associative experiences are always just under the surface for a designer, he/she makes choices on the fly based on those visual cues, something to avoid or something to definitely use. Like an actor who can produce a certain emotion by thinking about a personal event, a designer can evoke allusions to any visual experience.

This came to mind recently while I was watching a movie one night—La La Land. Damien Chazelle, the director (and perhaps also David Wasco, the production designer, and even Austin Gorg, the art director), had a vision for the movie that keyed into a visual presentation using a color palette of primary hues. Against gradients of blue to sunset pink skies, we see clothing and lighting colors like yellows, blues, reds and greens, making for a kaleidoscope of moving poster-esque imagery that became a true visual delight to witness. This was art as much as it was a musical, maybe more so. The above images were just two of the countless colorful scenes that, to me, were like ice cream.

What I did notice in examining that visual treat was something about that color palette: the greens in the clothing were all of the lime green variety, close to maybe a Pantone 382 (if you don’t know what that is, Google it). This told me that a more obvious raw green (say a Pantone 354) was definitely a color not only outside the palette of tones chosen by the director, but that it was not in line with his design sensitivities.
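For the curious, the difference between those two greens can be made concrete. Using rough sRGB equivalents for the two Pantone colors (approximations I’m supplying for illustration; Pantone’s own published conversions are the authority), the lime 382 sits much closer to yellow on the hue wheel than the raw 354:

```python
import colorsys

# Approximate sRGB equivalents (my assumption, not official Pantone data):
#   Pantone 382 (lime green) ~ #C4D600
#   Pantone 354 (raw green)  ~ #00B140
lime = (196, 214, 0)
raw = (0, 177, 64)

def hue_degrees(rgb):
    """Hue angle in degrees: ~60 is yellow, ~120 is pure green."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return h * 360

# 382's hue lands in the mid-60s (barely past yellow);
# 354's lands around 140 (a much purer green).
print(f"382: {hue_degrees(lime):.0f} degrees, 354: {hue_degrees(raw):.0f} degrees")
```

The numbers bear out the point: both read as green, but 382’s hue angle sits just past yellow, which is exactly what the eye registers as “lime”.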

If you recall, I once noted in this column that design—movies and TV included—is intentional. Anything that is not in line with one’s design sensitivities ends up on the proverbial cutting room floor.

Pharma Ads Farm ’70s Songs

I don’t know when it started happening, but what seems like a few years ago (ten? twenty?) advertisers started using clips of old songs—sometimes with altered lyrics—as background music to sell their products on television.

According to some sources, the disappearance of jingles started as early as the late 1960s. Advertisers began to think that the old jingles would sound old-fashioned to younger ears—more and more teenagers and young adults—certainly by the ’70s. And as we all know, advertisers like to target most of their ads toward that coveted 18-to-34 age bracket.

Of course, music itself was changing, as it always will. But how music is marketed would play a part in what happened to TV commercials, as we’ll soon see. I wrote an article on the demise of the TV jingle (see my entry from January of this year, “The Soundtrack of Our Lives?”). My focus today is to show how and why advertisers are using past music to accompany their messages.

There is a consensus among advertising historians that Michael Jackson was the first to make a foray into putting an already released song into TV advertising, when he adapted his hit “Billie Jean” for a Pepsi commercial in 1984. After that, celebrity-partnered ad campaigns began popping up (Run-DMC with Adidas, Madonna with Pepsi).

You’d think that perhaps advertisers were misappropriating old music for their ads—especially Big Pharma. After all, Big Pharma—the largest drug companies in this country—spends huge sums of money to promote its meds. This year alone, to date, four of the top pharmaceutical companies (Pfizer, Eli Lilly, AbbVie, and Bristol-Myers Squibb) have spent $2.81 billion on TV ads, and that’s only 40% of all the drug ads on TV. Meaning, viewers, Big Pharma will be blowing over $7 billion by year’s end. Is that crazy?
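That $7 billion figure is simple proportion: if $2.81 billion represents 40% of the spending, the whole pie is 2.81 divided by 0.40, or just over $7 billion. A quick sketch of the arithmetic (the dollar figures are the ones cited above):

```python
# Extrapolate total TV drug-ad spending from a known share.
top_four_spend = 2.81   # billions of dollars, the four companies' spending
share_of_total = 0.40   # their share of all TV drug ads

total_spend = top_four_spend / share_of_total  # just over $7 billion
print(f"Estimated total: ${total_spend:.2f} billion")
```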

That prime age bracket for targeted ads has been augmented to include retirees when it comes to advertising meds and medical services. After all, baby boomers make up around 25% of the consumer market, and Big Pharma would be remiss in ignoring the massive revenue potential here. And since the music industry had already made the mechanism of licensing work for whoever wanted to use it, Big Pharma naturally gravitated to songs most associated with the age bracket it wanted to key on. So ’70s music—pop songs anywhere from 40 to nearly 50 years old—was ripe for the picking.

And as we’ve seen before, TV viewers remember music as a subliminal thing, and advertisers depend on this link for viewers to remember the med. Of course, Big Pharma puts these ads on TV so doctors won’t be able to ignore their patients’ questions about them, promoting the sale through the medical system itself.

The above examples show the use of ’70s tunes: left, we have Ozempic—a drug for type 2 diabetes—using Pilot’s “Magic”, a tune from 1974; at center, we see Anoro—a COPD drug—using Fleetwood Mac’s “Go Your Own Way” from 1977; and at right, we’ve got Trelegy—another COPD med—using “ABC” (one-two-three), the catchy 1970 song from the Jackson 5.

Bands and solo artists have been hurting in recent years from the industry’s way of marketing music. Streaming and selling music online have truncated the money to be made. With no retail outlets, the way music is purchased has made it practically necessary for music artists to license their work in every way they can. It used to be regarded as “selling out”—making your music too “commercial”. But the tide was rolling, and too much money was at stake to be ignored. Last year, revenue from licensed music amounted to over $355 million in the U.S.

Do I like it? No. I don’t want to remember music like this. The way the advertising industry has corralled music to its use has created a miasma of sound and imagery you can’t run away from, no matter where you are—whether sitting in a movie theater awaiting the feature film or just watching TV at home.

Almost like tones in a watercolor that run together, it all becomes a blur of subliminal noise that leads me to think of mind control.




Design Cues Show Up Everywhere

Design trends are funny. They sometimes show up in the oddest places, and across entire spectrums—categories that have nothing in common. Or so it would seem.

I don’t like the term “awareness” as it applies to an affliction (e.g., “autism awareness”). It has an ineffective, almost powerless connotation attached to it. But today I’ll use the word in conjunction with the word “design”—design awareness. This is something real that all good designers have. It’s ingrained in them.

Designers themselves are unique among visual people in that they gather mental pictures by simple observation of everything around them. They store them away in their subconscious, and then at an opportune time, that proverbial lightbulb goes on to launch an idea from it. These are what I like to refer to as design cues.

As laymen, we like to think that design trends come about all by themselves, as if the whole visual landscape’s leaning in a certain direction were coincidence. You walk through a clothing store and see hot pink as a predominant color among different brands. Or you visit a few auto showrooms and see the trend of similar dark-colored wheels.

All this comes about because designers will copy one another, either consciously or unconsciously. And this happens across those aforementioned categories, all because that design subconscious has that library of stored imagery waiting to be used. Some of that imagery is fresh, from a few months ago, while other mental pictures are years old.

I noticed a BMW i3 the other day (pictured at top left), an almost six-year-old electric vehicle designed mostly for short urban travel. It seats four and has a body made from a hemp composite. For those who might be interested, its range is around 100 miles on a 4.5-hour charge. (I won’t comment on the build quality of this vehicle in today’s article.)

What immediately struck me about it was how much it looked like a shoe: it has body panels of different colors (black plus one other) and an overall shape that’s stubby, not entirely unlike the child’s athletic shoe pictured at top right. And I didn’t have to look far to find that pic, even with the very similar color arrangement.

Was that design cue by accident? You’d have to consult BMW’s design staff. Of course, they probably won’t provide an answer, but one thing is true: this particular design trend is common in more youth-oriented markets (or I should say young adult markets), where the inspiration comes from wanting to be different from the previous generation no matter what.

Case in point: those dark wheels I referred to earlier are a maturation of a design cue brought about by young drivers getting their first car that either has no hubcaps or by taking the chrome hubcaps off dad’s hand-me-down vehicle. Auto manufacturers then built on that cue, because their designers saw what was happening and made it a trend.

The same cues could’ve come about for the Honda Element, a vehicle that was on the market from 2003 until 2011 (Nissan made a similar vehicle, the Cube, available in the U.S. from 2009 until 2014). Its upright, rather boxy shape was anything but like your parents’ car. It also came with different colored body panels (inspired by baseball shirts, or maybe just primed body panels?).

It doesn’t matter where the inspiration comes from. Design cues can come from nature (winged designs such as Chrysler’s logos), from movies (fashion designs from period films like The Great Gatsby), or from even the military (automotive designs such as the VW Thing derived from Germany’s WWII era Kübelwagen). Designers borrow from any number of sources.

So, readers, is all design—or at least most of it—original? Not by a long shot. But seeing those trends developing from visual cues amounts to real design awareness.


Some of the Best Logos Are Free

I was driving the other day where I live near Bradenton, Florida, when I noticed a sign by the side of the road: this nicely designed logo (upper left example). You just don’t see well-designed logos very often, and certainly not on signs. But there it was.

And so when I returned home, I looked it up and found that a women’s resource center (sorry, but I was ignorant here) is a community center where women can go to get assistance for all kinds of domestic circumstances, such as spousal abuse, sexual assault, legal assistance, health care, and even housing. I was impressed, on several different levels.

The fact that these centers exist attests to the generosity and concern of local communities, which help women in need and offer support however they can, all for practically no money. The centers also make their services known by offering education about their programs at schools, colleges, and other municipal venues. That’s one reason I was impressed.

So I went online and did a search for other women’s resource centers around the country and their logos, on a hunch that maybe other centers’ logos were just as well-designed as this one in Bradenton. And I was mildly surprised to see that the vast majority of them have very nice logos: well put together, with clean lines and well-thought-out imagery.

But I think I was most impressed by the thought that these logos were done by good designers pro bono. The designers, asked to create a design for this kind of service—and knowing its worth—more than probably decided to do the work for nothing, just for the honor of being asked.

I personally have not done any pro bono work such as this, but I have done free work just for being asked, and I can say it evokes a certain pride in having done that. Here, in these logos pictured above, I can only imagine the kind of gratitude given on both sides of the transaction.

This points up something I’ve noticed over the years: when a designer is given the opportunity to work for a worthy cause, the client will usually allow design freedom, within limits, and the designer’s best work will show.

My first reaction to the top left example was the image of the “W” as a flower. Nicely thought out, and the Optima font goes well with it. I’ll give this one an “A”.

The top right example, for a center in Orlando, reminded me of something one of my college roommates would’ve put together. The image in the logo looks like three figures linked as if in dance, with that kind of built-in motion to it. It’s clean and concise and reads well, even if the Gotham font gives it a slightly generic feel. It gets an “A-”.

The bottom left example, showing a much more casual approach, is for a center in Winona, Minnesota. The three letterforms—done on the sweet side of the color wheel—read OK (the “R” less so) and are fairly well-done, but the accompanying type to the right feels a little off and too separated. This gets a “B-”.

The last example, done for a center in Greensboro, NC, is quite well-done and has a figure formed out of the well of the “O”, promoting a feeling of freedom. I like this one a lot, including the fine serif font which gives the design a real dignity. This gets an “A+”.

Fine work all around here.

Some Things You Can’t Overlook

(This post originally ran in March of this year.)

Having been in and around design my entire creative life, I cannot unsee mistakes in anything related to it. All I want to do is correct them. But I can’t. All I can do is try to ignore them, which I also cannot do. A catch-22, that.

One of my close friends sent these images to me the other day, and they impressed me so much I felt I had to make examples of them. Literally.

Remember that in type design, readability is key. The examples above don’t all have the same issues, but they all suffer in readability.

In the first example at upper left, I can’t help but think the type design in the yellow sign was intentional. But I can’t see the reason for it. There is no play on words, no “bucket list” correlation. It’s just a gimmick to make you stare at it and piece it together. As a promo, it’s just a cheap idea.

Next we have one of two things: Spicy Soy & Garlic, or Sp & Soy Icy Garlic. Look at it. Are you kidding me? The other design choice that makes me cringe is that the pepper overlaps the type at left while the garlics at right don’t overlap anything. An example of non-parallel design thinking.

Another tenet of good type design is that things generally read from left to right. We are conditioned to read things that way because we learn to read from books and other publications where the copy is in sentences. Make sense?

Next: a mug with copy reading “Take THE Time”. Except here the type sits against a texture too complex for the chosen font, and the tonal contrast isn’t strong enough to make it readable. And the word “THE” has its own texture competing with the background. Terrible. What—no art direction?

The last two examples are just laughable. The one at left is on an entrance to a park, and is supposed to read, “PUT PETS ON LEASH”. But the first two words have commas after them (one misplaced), possibly added after the sign was spray-painted (you can see the stenciled letterforms) in a vain attempt to make the word spacing evident.

The signage on the restaurant facade is so funny, it’s ridiculous. BBQ ribs on a bison silhouette is OK, I guess (ribs from a bison are arguably questionable), but fried catfish from a moose would make Bullwinkle question his DNA. I’m not saying it isn’t funny, but alongside the bison it isn’t parallel design thinking.

You can bet I’ll devote a future issue to parallel design thinking. But right now, just enjoy staring at these goofy examples of horrendous type design.