Category Archives: Focusing on Process

Save time and avoid frustration during content collaborations

Here are some common workplace issues that my technical editing students are focusing on this week.

Content Collaboration #1

You’ve just started work on a document and are gathering information from several individuals who represent different departments at your organization. You get responses from three of them.

  • Individual A suggests deleting an entire section of content.
  • Individual B sends you a report with information that will create an entirely new section of content.
  • Individual C gives you a thoroughly copyedited document, with punctuation, grammar, and other mechanical revisions.

Individual C has wasted time copyediting content that will be deleted. If you multiply this wasted time across the work of individuals throughout an entire organization, even a small one, you have identified a tremendous source of inefficiency.

Content Collaboration #2

You’ve been working on a document for months. It’s been reviewed by multiple individuals, representing various departments, and you are now seeking final approvals.

  • Individual D suggests deleting an entire section of content.
  • Individual E provides you with an entirely new section of content.
  • Individual F gives you a thoroughly copyedited document, with punctuation, grammar, and other mechanical revisions.

The actions of individuals D and E are the source of your frustration. They should have provided these changes in earlier reviews.

A 2018 industry study found that collaborating on workplace documents was one of American businesses’ most broken processes. To reduce inefficiency and frustration, you can adopt tools designed for content collaboration. But understanding what professional editors call “levels of edit” will help regardless of your tool. Read on for an explanation.

A Primer on Levels of Edit

I’m going to start with a little background, so skip ahead if that doesn’t interest you today.

Origins

Most of us trace the origin of levels of edit to a 1980 publication by two technical editors at the Jet Propulsion Laboratory, Robert Van Buren and Mary Fran Buehler, working under a NASA contract.

JPL Editorial Work Types    Level 1   Level 2   Level 3   Level 4   Level 5
Coordination                   X         X         X         X         X
Policy                         X         X         X         X         X
Integrity                      X         X         X         X
Screening                      X         X         X         X
Copy Clarification             X         X         X
Format                         X         X         X
Mechanical Style               X         X
Language                       X         X
Substantive                    X

Here’s how they differentiated between levels of edit.

In a Level 5 edit, the editor verifies that JPL policy has not been violated, routes the manuscript through the various production processes and performs a liaison function between the author and publications personnel… [T]he editor, performing a Level 3 edit, will be required to clarify the copy for the compositor and to indicate the format… And in a Level 1 edit, the full range of editorial capabilities is applied to produce a first-class publication.

Van Buren, R., & Buehler, M. F. (1980). The levels of edit (JPL Publication 80-1). Jet Propulsion Laboratory, California Institute of Technology.
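To make the cumulative structure of the matrix concrete, here is a minimal Python sketch (my own illustration, not part of the JPL publication) that encodes the table above and reports which work types a given level of edit includes.

```python
# Encode the JPL matrix: for each work type, record the highest level
# number (the lightest edit) at which it still applies. Level 1 is the
# fullest edit; Level 5 is the lightest.
HIGHEST_LEVEL = {
    "Coordination": 5,
    "Policy": 5,
    "Integrity": 4,
    "Screening": 4,
    "Copy Clarification": 3,
    "Format": 3,
    "Mechanical Style": 2,
    "Language": 2,
    "Substantive": 1,
}

def work_types(level: int) -> list[str]:
    """Return the editorial work types included in a given level of edit."""
    return [t for t, highest in HIGHEST_LEVEL.items() if level <= highest]

print(work_types(5))  # ['Coordination', 'Policy']
print(work_types(1))  # all nine work types
```

Notice that asking for a lower level number never removes a work type; it only adds more, which is exactly what the quote above describes.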

The JPL system is specific to an organization and a genre. Let me introduce a simpler, more generic system for categorizing editorial activities.

Basic Principles

First, let’s make sure we’re on the same page about the types of content that are relevant to this post. I’m focused only on workplace or professional contexts. Some, but not all, of what I have to say is relevant in the context of book or periodical publishing.

Here’s a way of categorizing the possible levels of edit based on the professional editorial standards published by the Editors’ Association of Canada.

  • Structural editing is assessing and shaping material to improve its organization and content, often communicated to an author in an email or meeting.
  • Stylistic editing is clarifying meaning, ensuring coherence and flow, and refining the language, often by marking up the author’s digital file, for instance by using tracked changes and inserting comments in Microsoft Word.
  • Copyediting is ensuring correctness, accuracy, consistency, and completeness, usually done in the same way as stylistic editing.
  • Proofreading is examining material after layout or in its final format to correct errors in textual and visual elements, often by marking up “special” files (like a PDF produced from Adobe InDesign) for a production professional.

These levels of edit also correspond to the different types of professional editors.

Application

How could applying the levels of edit increase efficiency and reduce frustration in the situations described at the top of this post?

When sending material, give people instructions like these:

Please respond by supplying or correcting information. You can email me or send me documents with information that is needed. You can also insert comments into the document file. Turn on tracked changes if you add information directly into the body of the document.

You may also suggest reorganizing content. Please do not focus on wording or mechanics or format at this stage. You’ll have the opportunity to help with this after the content and organization are finalized.

Levels of edit applied in Content Collaboration #1
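If your collaboration tooling allows it, you can make the requested level explicit in the request itself. Here is a minimal Python sketch (a hypothetical helper of my own, with illustrative wording) that pairs each level of edit from the list above with stage-appropriate reviewer instructions.

```python
# Hypothetical reviewer instructions keyed by level of edit. The level
# names follow the Editors' Association of Canada categories discussed
# above; the wording is illustrative, not prescribed.
REVIEW_INSTRUCTIONS = {
    "structural": (
        "Supply or correct information and suggest reorganization. "
        "Please do not comment on wording, mechanics, or format yet."
    ),
    "stylistic": (
        "Mark unclear wording and problems with flow using tracked "
        "changes and comments. Content and organization are settled."
    ),
    "copyediting": (
        "Mark only errors in grammar, punctuation, spelling, and "
        "consistency, using tracked changes."
    ),
    "proofreading": (
        "Check the laid-out file for errors in textual and visual "
        "elements. Annotate the PDF; do not rewrite."
    ),
}

def review_request(document: str, level: str) -> str:
    """Build a review request that names the level of edit explicitly."""
    return (f"Reviewing '{document}' at the {level} level of edit.\n"
            f"{REVIEW_INSTRUCTIONS[level]}")

print(review_request("Q3 policy update", "structural"))
```

The point of the design is simply that every reviewer sees the level of edit named before they see the document.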

To save time and frustration when collaborating with others during content production, make sure people know what level of edit you’re requesting. Be explicit. You may have to provide a couple of examples (e.g., a “do this” and “don’t do this”). What directions would you provide for Content Collaboration #2?

Ideally, you would get together with all of the stakeholders and agree to a review process that assigns the appropriate levels of edit to each collaborator. If you can’t make that happen, you can still use levels of edit to make your workplace collaborations less frustrating.

Sources for Learning More

Buehler, M. F. (1977). Controlled Flexibility in Technical Editing: The Levels-of-edit Concept at JPLTechnical Communication24(1), 1–4.

Buehler, M. F. (1988). The Levels of Edit as a Management ToolIPCC ’88 Conference Record ‘On the Edge: A Pacific Rim Conference on Professional Technical Communication’., doi: 10.1109/IPCC.1988.24000.

Grover, S.D. (2021). Levels of Edit. Grover’s English (blog site).

Grover, S.D. (2021). Types of Editing. Grover’s English (blog site).

Reaching (and respecting) veterans with plain language

To honor our US veterans today, let me share an example of exemplary writing practice from the Veterans Benefits Administration (VBA).

A team working on a form wanted to use the question, “When were you last (gainfully) employed?” They felt that the term “gainfully employed” would gather more legally sufficient and accurate information than just the word “employed.”

Testing showed that readers used at least three different definitions of “gainful” employment:

  • Any job
  • A job that provides benefits or where you can put money away
  • A job that keeps you above poverty level

In fact, research showed that different government agencies may have different definitions of “gainful.” But, more importantly, because each reader had a different definition of the word, the agency would have gotten less accurate information if the word had been in the document.

Check out other examples from the VBA. They have a history of testing messages they will send to veterans.

The federal government recommends several types of testing in its plain language guidelines.

  • Paraphrase Testing: individual interviews; best for short documents, short web pages, and testing the questions on a survey.
  • Usability Testing: individual interviews; best for longer documents, websites where finding the right information is important, and forms (see http://www.usability.gov).
  • Controlled Comparative Studies: large-scale studies in which you don’t meet the people but collect statistics on responses; use paraphrase testing and usability testing on a smaller scale first.

Testing is a best practice for any organization that cares about the effectiveness of its messages. It’s a pleasure to share something positive about our federal government, especially today, when we remember all of the men and women who have served on behalf of the rest of us!

If content is king, then usability is queen

You’ve heard me say how important reader testing is when you truly care about meeting the needs of your audience. The Before and After Gallery hosted by the DigitalGov User Experience Program provides some terrific examples. [6/16/14 Update: examples appear to have moved to Government Usability Case Studies.]

After watching some representative readers use the Fueleconomy.gov Mobile Site, the web writers identified several issues they could revise easily (like clarifying terminology and moving buttons) to improve their readers’ experience. The graphic at right shows how they measured the impact of those changes.

Reader testing of written messages fits inside the practice sometimes called user-centered design or UCD. Yes, UCD is associated with software or other digital products and their users. But readers are users of information. If you want to explore the benefits of UCD and testing, check out this 6-minute video.

Asking your audience for feedback. And signaling you listened by making changes that help them use your information. That’s just good business — even when it comes from the US government.

A simple way to test your reader’s response before document delivery

I’ve said it before. I’ll say it again. Nothing signals your status as a pro workplace writer as much as testing an important document with representative readers before you deliver it.

But reader testing can be expensive. You need equipment and training to conduct eye-tracking studies. Thanks to two of my Dutch colleagues, Menno de Jong and Peter Jan Schellens, there is a simple and inexpensive technique I recommend. In fact, we have our undergraduate business students use it.

In the plus-minus technique, the writer recruits a few representative readers. More and better representatives give you better results. (Here’s a free report from experts on recruiting folks for this kind of testing.) The writer gives the readers the draft document and collects their impressions.

Here are the instructions we provide readers for document testing.

Please read the entire document, including the parts that you would probably skip “in real life.” Take your time. Read the document at your own speed.

1. While reading, we want you to place a plus (+) and a minus (-) in the margin anytime you judge something as positive or negative. But do not think too long about your impressions. Any plus or minus is okay, as long as it reflects what you are thinking as you read.

For example, if you find something in the document funny, interesting, clear, or important, write a plus. If you find something in the document not interesting, unclear, or unimportant, write a minus. You can write down pluses and minuses for any reason.

2. Please indicate which part of the document a plus or minus applies to by circling or underlining it.

Decide for yourself which units of the document to mark with a plus or minus. For instance, you can write down a plus or minus for a paragraph, a heading, a sentence, a word, an illustration, or a caption.

3. When you have finished, we will ask you to talk about your impressions of this document.

After the readers are finished, someone debriefs them. This can be the writer — although it’s probably best if the reader doesn’t know the interviewer wrote the draft.  Here are the instructions we provide for the interview.

Tell the reader you would like to learn about their impressions of the document so that it can be improved. Remind him/her that all of his/her responses are important to you. Ask if you can audiotape your interview. If you do not audiotape, make sure you go slowly enough to get all information written down.

Ask the reader to describe what they were thinking for each plus and minus. Get the reader to be as specific as possible about the places where he or she felt positively or negatively. You must be able to identify specific words, paragraphs, headings, figures, etc. later during your analysis of the results. As soon as possible after the interview, summarize your interpretation of the reader’s responses using the questions below. (Not all questions will be relevant for all interviews.)

    1. Does the reader correctly understand the given information? What worked well? What is unclear?
    2. Will the reader pick up the document and start reading it? Why or why not? Will the reader focus on the most important information? Why or why not?
    3. Can the reader apply the given information in a productive way and in a realistic setting? What seems to be most helpful? Where does the reader go wrong?
    4. Does the reader find statements to be credible? Which ones? Why or why not?
    5. Does the reader like the way or the order in which information is presented? What worked well? What needs to be revised?
    6. Does the reader get new, relevant, and complete information? What does the reader like? What does the reader want to skip?

The results you collect from representative readers will identify any major issues with your draft and help you determine what should be revised before you deliver the document to your actual readers.
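To make the analysis step concrete, here is a minimal Python sketch (the annotation data is invented) that tallies minus marks per document segment so that the most negatively marked units surface first.

```python
from collections import Counter

# Hypothetical records from a plus-minus test: each tuple is
# (reader_id, segment, sign), where segment identifies the marked unit
# (a heading, paragraph, sentence, etc.) and sign is "+" or "-".
annotations = [
    ("R1", "para 3", "-"),
    ("R2", "para 3", "-"),
    ("R1", "heading 2", "+"),
    ("R3", "para 3", "-"),
    ("R2", "figure 1", "+"),
]

# Tally minuses per segment; segments marked negatively by several
# readers are the strongest candidates for revision.
minus_counts = Counter(seg for _, seg, sign in annotations if sign == "-")
for segment, count in minus_counts.most_common():
    print(f"{segment}: {count} minus mark(s)")
# para 3: 3 minus mark(s)  -> revise this paragraph first
```

Counting is no substitute for the debriefing interview, of course; the tally only tells you where to look, while the interview tells you why readers reacted as they did.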

The plus-minus technique allows you to balance the importance of concurrent testing (i.e., learning what the reader is thinking in real time) with the simplicity of retrospective testing (i.e., learning what the reader thinks after he or she has read your document). You can learn more about reader testing (usability testing) from the Center for Plain Language or the Nielsen Norman Group (user testing).

Pros test their draft documents — with readers


Because pro writers recognize their limitations, they adopt practices designed to overcome them. One of those practices is testing a draft before delivering the final document. That’s why I highlighted the practice within my post: What is plain language? (Part Four: Putting It All Together in a Process).

There are lots of document quality testing methods. While the source is “aging,” Karen Schriver’s 1989 summary of methods is the best overview I’ve seen. She provides the figure shown below, which displays methods on a continuum from text-focused to expert-focused to reader-focused.

Schriver concluded,

When practical considerations such as time and expense allow, reader-focused methods are preferable to text-focused and expert-judgment-focused methods because they shift the primary job of representing the text’s problems from the writer or expert to the reader. Thus, reader-focused methods help minimize the chances of failing to detect problems. In addition, reader-focused methods expand the scope of text problems that get noticed, shifting the evaluator’s attention to global problems, especially problems of visual and verbal omissions.

So reader-focused methods are the best tests of document quality. Although reader testing can be expensive, cost is no reason to dismiss it. Jakob Nielsen made the argument convincingly in Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier. Reader testing can be as simple as asking your actual reader to review a draft.

A while back, I promised a short video tutorial on reader testing. It’s on my list of to-do items for this summer . . . In the meantime, you can check out usability testing from the Center for Plain Language or user testing from the Nielsen Norman Group.

What is plain language? (Part Four: Putting it all together in a process)

The first three parts of my series on defining plain language focused on the three aspects of the rhetorical triangle: (1) textual elements like style and organization, (2) reader outcomes like comprehension and usability, and (3) writer outcomes like organizational costs and benefits. To overcome the limitations of any one of those aspects when considered alone, several experts talk about plain language as the process by which successful workplace documents are created — a process in which all three aspects are integrated.  That’s my focus here in Part Four.

Minimally, creating a document requires drafting: putting words on paper (literally or figuratively). Sadly, amateurs operate as if this single step or activity = writing. To enhance the quality of documents and move amateurs toward expert status, writing teachers regularly add two additional steps to the process: intentional planning before you draft and revising after you have drafted. The wisest of these folks also promote some form of document testing in order to determine what and how to revise.

Let’s talk about the phases of the process shown in red using examples from the proposed mortgage loan estimate document created for the US Consumer Financial Protection Bureau (CFPB) project called Know Before You Owe by Kleimann Communication Group. I mentioned their exemplary work in an earlier post.

Planning

Pro workplace writers work their way through heuristic questions about the rhetorical context, message content, and content organization before they draft a document. Last year, I wrote a series of posts about a letter soliciting sponsorships for an outdoor sign at The First Tee of Tuscaloosa.

I don’t want to repeat what I wrote in those pieces. So let me just highlight some examples of the planning process for the loan estimate document.

In general, written messages are preferred for complex content and for situations in which a record of message delivery is needed. In the case of the loan estimate, both conditions were met. The Dodd-Frank Act required lenders to disclose information about mortgage loans to their customers after they apply for a mortgage and shortly before they complete the process.  It also required combining the information from two separate forms.

Given the widespread interest in banking and financial practices, the CFPB judged that the high cost of creating a quality loan estimate document was worth it. One way they corroborated this interest was by holding a symposium with consumers and industry representatives in December of 2010.

The writers of the loan estimate knew their readers were the heterogeneous group of US consumers. They also knew some general things about that group of readers: (a) their ability to understand message content was low because the level of quantitative and financial literacy is low for around 50% of US adults and (b) their willingness to attend to and use the message content to make decisions was low because that is a characteristic of all US consumers. The team of writers had to develop both informative content (like examples and definitions) to address the audience’s inability to understand mortgage loan details and persuasive content (like evidence) to address the audience’s unwillingness to pay attention to and use those details.

Before they began drafting, they read relevant documents and research, talked to CFPB staff and consumers, and brainstormed. After they began testing drafts, the writers also met with small business owners about using the new loan estimate.

Testing

Testing a draft document is the phase of the workplace writing process that most distinguishes you as a pro. I haven’t written much about the types of document testing. My goal is not to be exhaustive here. Instead, let me share how the writers of the loan estimate tested their document.

First, they posted drafts of the documents on the Know Before You Owe website and collected 27,000 comments.

Second, they used parallel design tests by creating two significantly different designs for the loan estimate. They implemented those two designs in eight document versions based on four different loan details. In Round 1, they presented these document versions to seven English-speaking consumer participants, two Spanish-speaking consumer participants, and two mortgage lender participants in Baltimore. Using one-on-one interviews, they asked participants to read-and-think aloud and to answer comprehension questions; they also asked lenders to answer implementation questions.
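As a minimal illustration of how the eight versions arise, here is a sketch in Python; the loan-detail labels are my own placeholders, not Kleimann’s actual scenarios.

```python
from itertools import product

# Two significantly different page designs crossed with four sets of
# loan details yield the eight document versions used in Round 1.
designs = ["Design 1", "Design 2"]
loan_details = ["loan A", "loan B", "loan C", "loan D"]  # placeholder labels

versions = [f"{design} with {detail}"
            for design, detail in product(designs, loan_details)]
print(len(versions))  # 8
print(versions[0])    # Design 1 with loan A
```

Crossing designs with loan scenarios this way lets testers see whether a design holds up across different content, rather than just for one loan.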

Even when the benefit of a quality document doesn’t warrant the expensive testing done for the loan estimate, testing is still possible — and warranted. Editorial or expert review is certainly less costly. But even reader testing can be done on a shoestring budget. To learn more about testing documents, you can begin with the guidelines on usability testing from the federal government’s plain language site.

Revising

For the loan estimate, testing suggested numerous revisions to the textual elements in both designs. Among the most serious issues identified from Round 1 testing, the writers learned (a) consumer participants needed more clarity around the monthly loan payment and the total monthly payment; and (b) they needed to address how the design could encourage consumers to read the Cautions at the same time as they read the Loan Terms. So writers revised a number of textual elements in both designs. A few specifics for Design 1:

  • To make the loan amount more prominent, they added the heading Loan Amount.
  • To simplify cognitive processing and keep all cautions together, they combined the content of Key Loan Terms and Cautions.
  • To draw attention to cautions, they added more emphasis to this word in headings with shading and all caps.

Testing & Revising (Again)

The writers used an iterative design process. That means they tested the revised versions of the loan estimate in Round 2 in Los Angeles. And then revised the content, organization, and style some more. And tested again in Round 3 in Chicago. And revised more. In all, there were ten rounds of testing in cities across the US followed by revisions.

So . . .  What is plain language?

After four rather lengthy posts, here — with a drumroll — is how I understand the concept of a plain language document.

  1. It can be described as a set of text elements, including content (e.g., examples), organization (e.g., headings), style (e.g., verb voice), and mechanics (e.g., punctuation).
  2. The outcome of delivering the document is that it achieves the writer’s purpose, while minimizing costs and maximizing benefits for the writer’s organization.
  3. Other outcomes of delivering the document are target reader interest, comprehension, and ability to use the writer’s message.
  4. The only way to produce a plain language document (one that achieves #2 and #3) is to use a process for choosing text elements that incorporates planning, testing, and revising.

And, finally, describing plain language with fewer than all four of the items above is like describing only a portion of the elephant.

Beware of those who claim plain language is simpler than this. That would include those who make promises based on software tools or platitudes. And pay attention to those who include all four items. (See, for instance, the plain language checklist from the Center for Plain Language.)

No wonder becoming a pro who can produce plain language takes so much time and effort! But that’s what makes a pro so valuable.

Link

I like to highlight best practices in writing for the workplace when I see them. Here’s a terrific example. This morning, Judy Knighton posted Listen to your readers! at Write, “a professional services firm that helps government and business organisations create clear, reader-friendly communications” located in New Zealand. I’ve written about audience analysis and posted a video tutorial on the topic here before. I’ve also highlighted how the usability process can be successfully used to develop written materials. In that post, I noted that some version of reader observation can be used by anyone writing in the workplace.

Here’s part of what Judy wrote about her recent reader observations:

I’m doing a series of user tests on an investment statement for a KiwiSaver scheme. I’m using a couple of test methodologies. In the first part of the test, the reader goes through a section of the investment statement and talks about what they’re thinking as they read. In the second part, they answer some specific questions about the content so that I can see whether the information was easy to find and understand.

It’s fascinating watching different reading strategies at work. Yesterday, I conducted three tests and saw three completely different strategies.

Read everything

Reader one started at the beginning of the Key Information section, and read every line and every word. At each cross reference to more detailed information, she turned to that page and read the detail before going back to continue with the Key Information section.

Read summary in order, and skim the rest

Reader two started at the beginning of the Key Information section and read it through. She skipped a few paragraphs when the headings indicated that the content wouldn’t interest her. She then started on the detailed information and skimmed through the headings, stopping to read detailed content that discussed questions she had in her mind from the Key Information section.

Read what looks interesting, and then find a real person to question

Reader three flipped through the document from the back. He then opened the Key Information section, skipped past the first page because he thought from the headings that it would tell him stuff he already knew, read a paragraph or two, skipped some more sections because he decided they didn’t apply to him, and finished the Key Information section in record time. He then turned back to read in detail some of the information he skipped, this time turning for more detailed information at the cross references. Deciding that the detailed information was too detailed, he returned to the Key Information section and read most of it, coming up with a short list of questions that he said he’d phone in.

Write for your readers

To me, this demonstrates the power of headings in writing for your readers – and the power of user testing to find out whether you’ve succeeded.

I couldn’t agree more with Judy’s conclusions. There’s no substitute for actually observing your readers deal with a document — despite how humbling the experience is (if you’re the writer).

And, when I get time, I’ll create a short video tutorial on reader testing. It is one of the cornerstones of the undergraduate business communication course I taught for decades! Here are two short explanations of various document testing methods: usability testing from the Center for Plain Language and protocol tests and focus groups from the US Air Force. Stay tuned . . .

Pros plan message organization strategically

This post follows up on a couple of earlier ones about a letter soliciting sponsorships for an outdoor sign at The First Tee of Tuscaloosa. “Pros don’t settle for platitudes about audience” described the principled way in which we analyzed our audience. “Pros plan message content strategically” described how we developed the content for the first draft. This one describes how we planned to organize that content. And, based on our organizational blueprint, I (finally) provide a copy of our first draft.

Planning the Solicitation Letter

You shouldn’t be surprised to hear me say that pros think carefully about purpose and audience when planning a message so they can be STRA-TE-GIC. Let me briefly review the two areas covered in my previous posts.

Purpose & Audience: As a board member, my husband was writing a letter soliciting sponsorships.  (I helped ’cause that’s the kind of spouse I am.) The purpose of this letter was to direct readers, and the bottom line message was “give us $500.”  In sum, we were addressing a large, moderately homogeneous group of strangers with little relevant expertise and moderately low sensitivity to the request because it created only a small imposition.

Message Content: Using what we knew about the rhetorical context, we started developing content for the letter. My earlier post demonstrated techniques from Revising Professional Writing (RPW). Basically, to make readers ready to accept the letter’s request, we had to address their lack of expertise by developing informative content and their sensitivity by developing persuasive content. As a result, we had a laundry list of ideas for inclusion in the letter:

  • Operational definition of The First Tee program. (mission statement, etc.)
  • Descriptions of several aspects of program. (the nine core values, cost of the program, size of the national and local programs, etc.)
  • Descriptions of sponsorship-related costs and advertising details.
  • Examples of impact of the program on a specific child. Or the way values are taught within a specific lesson.
  • Contrast with junior golf.
  • Classifications of sponsorship levels.
  • Major claims: (a) First Tee of Tuscaloosa is a “winning” organization; (b) the sponsorship sign will serve as decent advertising.
  • Evidence: For (a), we should mention local participant numbers, as well as national growth; also national premier events like Pebble Beach tournament and honorary chairperson, George W. Bush. For (b), location of sponsorship sign and traffic at location.
  • Evidence Criteria: For (a), participant data establishes growth (=winning) and national tournament and chair signal involvement of celebrities (=winning). For (b), location and traffic by signage establish potential market (=decent advertising).
  • Potential objections: Some may believe the kids would be better off playing football.  (clearly, they’re NOT thinking about the girls in the program)

The list represents the result of our brainstorming about content for the letter. Its structure wasn’t meaningful. Much work remained before we could create a decent draft.

Message Organization: To begin providing some structure to our laundry list, we grouped the items in a diagram. I always encourage amateurs to use some type of visual technique for grouping ideas generated from brainstorming. This isolates decisions about grouping from those about sequencing. The diagram we produced is shown at right. The bottom line message is connected to four groupings of content.  This wasn’t the only useful way of grouping the content. We might have used two major groups (one for The First Tee and one for the sponsorship sign).  As with all communication decisions, there is no single right choice.

One of the things that happened while we grouped our content was that we decided to ignore the potential objection in our laundry list. It didn’t seem that important as an obstacle to achieving our goal. Or like something we could realistically address in the letter anyway.

The next step in planning the organization of our letter was to determine a sequence for our content. We used what we know about the genre of solicitation letters. In other words, we needed to think about what our audience would expect to find in our letter based on the others they have read. Most writers use samples of similar messages as a guide to inductively determine the appropriate sequence of information content + purpose for a document type that’s new to them.

We could have analyzed some samples because neither of us felt confident in our knowledge of solicitation letters. But we didn’t have to, because applied linguists (like Bhatia) have identified the following sequential moves (content + purpose) by studying sales letters:

  1. establishing credentials
  2. introducing the offer
  3. offering incentives
  4. referring to enclosures
  5. inviting further communication
  6. using pressure tactics
  7. ending politely

Applying these moves to the content diagram for our letter resulted in the blueprint at left. Note that we didn’t follow Bhatia’s move structure exactly. For instance, we decided we could put almost all details about the offer in an enclosed brochure so move #4 (referring to enclosures) appears right after #2 (introducing the offer), which could then be very briefly covered in the letter itself.  That choice was strategic because we believed we needed to devote more of the letter’s real estate to move #3 (offering incentives), which we saw as primarily altruistic in the case of this solicitation letter. And we thought describing the basics of The First Tee program was important to overcome our audience’s lack of knowledge about it.

We didn’t anticipate the need to use pressure tactics (move #6) or to include a polite ending (move #7) beyond inviting further communication. We thought the standard close of “sincerely” was enough. The fact that Bhatia studied letters from an English-speaking Asian culture (Singapore) may explain the need to include more closing remarks than required in our North American culture.
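Here is a minimal Python sketch of how our blueprint adapted Bhatia’s sequence (the move names are his; the blueprint ordering summarizes the choices described above):

```python
# Bhatia's seven sales-letter moves, in his published order.
BHATIA_MOVES = [
    "establishing credentials",
    "introducing the offer",
    "offering incentives",
    "referring to enclosures",
    "inviting further communication",
    "using pressure tactics",
    "ending politely",
]

# Our blueprint: the reference to enclosures promoted to follow the
# offer, and the last two moves dropped as unnecessary for our
# North American audience.
LETTER_BLUEPRINT = [
    "establishing credentials",
    "introducing the offer",
    "referring to enclosures",
    "offering incentives",
    "inviting further communication",
]

dropped = [m for m in BHATIA_MOVES if m not in LETTER_BLUEPRINT]
print(dropped)  # ['using pressure tactics', 'ending politely']
```

Treating the move structure as a checklist to adapt, rather than a template to copy, is exactly the kind of strategic choice that separates pros from amateurs.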

Drafting the Solicitation Letter

Amateurs who don’t plan sufficiently hate drafting. But, once we created the blueprint for our solicitation letter, we were more than ready to transform it into a draft.  I took on the task of translating the blueprint into sentences and paragraphs (’cause I’m faster at keyboarding than my hubby is).  This was so simple that it warrants little discussion. See the Draft Solicitation Letter to view the result.

With amateurs, I usually warn them to ignore any sentence- or word-level issues (punctuation, word choice, etc.) while drafting. It’s simply inefficient — especially when you don’t know what you might end up deleting. Any infelicities can easily be taken care of when revising the draft. More about “levels of editing” in the next, and final, post about creating the letter.

Stay tuned . . .


Pros plan message content strategically

A while back, I included a personal example of audience analysis. Thanks to Tom Orr for suggesting that I follow up by showing y’all the letter. I decided it might be helpful to share even more of the process — like how we developed the content for the first draft. So here goes.

Purpose & Audience: As a board member, my husband is writing a letter soliciting sponsorships for an outdoor sign at First Tee of Tuscaloosa.  (As a supportive spouse, I’m helping.) The purpose of this letter is to direct readers, and the bottom line message is “give us $500.”  In sum, we are addressing a large, moderately homogeneous group of strangers with little relevant expertise and moderately low sensitivity to the request because it creates only a small imposition. (To review our full audience analysis, see Pros don’t settle for platitudes about audience.)

Pros think carefully about purpose and audience before they write so that they can be STRA-TE-GIC.

Message Content Development: To make readers ready to accept the letter’s request, we have to address their lack of expertise by developing informative content and their sensitivity by developing persuasive content. The sources of information we have available include: the First Tee of Tuscaloosa’s board notebook, First Tee brochures from the national organization, a previous solicitation letter from the First Tee of Tuscaloosa Director, my husband’s notes from a conversation with the Director, and my husband’s personal knowledge of golf and First Tee, as well as his years of business experience.

I made my husband crazy (hey — it’s part of my job) by forcing him to answer questions based on the techniques from the Revising Professional Writing (RPW) chapter on developing informative prose before I would begin a draft.  (See my Tutorial on Developing Informative Prose for more instruction.)

  1. What needs to be defined for this audience? He thinks the First Tee program should be operationally defined. We can use the mission statement.
  2. What should be described for this audience? Several program-related descriptions. Definitely the nine core values: honesty, integrity, sportsmanship, respect, confidence, responsibility, perseverance, courtesy, and judgment. Also the cost of the program (both to the organization and to participants). The size of the national and local programs. Sponsorship-related descriptions. Definitely what advertising goes with the cost of sponsorships.
  3. What would be easier for this audience to understand with examples? Maybe the impact of the program on a specific child. Or the way values are taught within a specific lesson.
  4. What should be compared or contrasted for this audience? The program should be contrasted with junior golf. And maybe with competitive sports. (But that might backfire.)
  5. What needs to be classified for this audience? The sponsorships because my husband and the Director decided they should offer two levels.

I kept testing my husband’s patience level by asking questions based on the four areas of developing persuasive prose, following Stephen Toulmin, from a chapter in RPW. (See my Tutorial on Developing Persuasive Prose for instruction).

  1. What claims will benefit from evidence for this audience? (a) First Tee of Tuscaloosa is a “winning” organization; (b) the sponsorship sign will serve as decent advertising.
  2. What evidence will be persuasive for this audience? For (a), we should mention local participant numbers, as well as national growth; also national premier events like Pebble Beach tournament and honorary chairperson, George W. Bush. For (b), location of sponsorship sign and traffic at location.
  3. What criteria help this audience interpret evidence persuasively? For (a), participant data establishes growth (=winning) and national tournament and chair signal involvement of celebrities (=winning). For (b), location and traffic by signage establish potential market (=decent advertising).
  4. What objections from this audience should be anticipated? I’ve lived in Tuscaloosa long enough to suspect a few may believe the kids would be better off playing football.  (Clearly, they’re NOT thinking about the girls in the program.) I don’t think we can overcome this objection.
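For readers who like to see the framework laid out, here is a minimal Python sketch (field values abridged from the answers above) of the Toulmin-style planning record we built for claim (a).

```python
from dataclasses import dataclass, field

# A simple record for Toulmin-style persuasive planning: a claim, the
# evidence that supports it, the criteria readers use to interpret the
# evidence, and any anticipated objections.
@dataclass
class Argument:
    claim: str
    evidence: list[str]
    criteria: list[str]
    objections: list[str] = field(default_factory=list)

claim_a = Argument(
    claim="First Tee of Tuscaloosa is a 'winning' organization",
    evidence=["local participant numbers", "national growth",
              "Pebble Beach tournament", "honorary chairperson"],
    criteria=["participant data establishes growth (=winning)",
              "national tournament and chair signal celebrity involvement"],
    objections=["kids would be better off playing football"],
)
print(claim_a.claim)
```

Filling in all four fields before drafting forces you to notice when a claim has no evidence, or evidence has no criteria to make it persuasive.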

I should mention that my husband insisted we use some relevant portion of the Director’s previous letter.  It’s a reasonable request. I know how the world works. (Thank goodness the Director is a pro writer!)

Before I can share a draft of the letter, I need to work on a post about organizing content strategically. And I have a video tutorial I could update on the writing process, too.  Coming soon . . .