The digital marketing landscape is changing at a rapid pace, with many organizations planning to increase their budgets and diversify their tactics in the coming years. In fact, according to Forrester research, CMOs will spend nearly $119 billion on search marketing, display advertising, online video and email marketing by 2021. Over the next five years, search will lose share to display and social advertising while video will scale, Forrester said. These changes reflect a new emphasis on quality over quantity, a dynamic that will reintroduce human intervention into programmatic ad buying, turn marketers into growth hackers, and put long-tail publishers out of business.

To keep pace with these trends and take advantage of growth opportunities, many marketers are wondering how they can best leverage their resources, tools and budgets. As a result, a question that has likely come up is: Should we make a new in-house hire to achieve our goals, or is an agency partnership a better fit?

As we close in on two decades of work in the digital marketing realm, our experience tells us there's no one-size-fits-all answer. Every organization is at a different digital marketing maturity level, which calls for a tailored approach to scaling initiatives and driving results. So, before you post a job req or sign an agreement with an agency, ask yourself the following questions:

#1 What are my marketing objectives?

Your goals are the foundation of your marketing strategy, guiding every decision and tactic that comes next. As a result, evaluating your goals is a critical first step in weighing your hiring options. Essentially, you need to consider whether an agency or new in-house talent can put you in the best position to reach your goals.

#2 What kind of expertise am I looking to add to the team?

Generally speaking, most digital marketers have highly specific skill sets.
So, if your strategy calls for adding or expanding a specific area of expertise such as video production or graphic design, hiring in-house may be a great option. However, if you're looking for a jack of all trades, an agency will likely be better equipped. Why? Because you'll be able to leverage a team of highly specialized experts at once.

#3 How niche is my industry?

This one can cut both ways. If your organization is part of a highly niche industry, you can certainly bring someone in who already has related experience or can be nurtured as an internal subject matter expert. That said, agencies are staffed with fast-learning individuals who can fill the SME role, too. So, this one may come down to preference and bandwidth.

#4 What's the bandwidth of my current in-house team?

Your current in-house team likely has a big workload and/or lacks the expertise needed to achieve certain goals. So, if you're looking to ease their burden or diversify a specific area of talent, hiring in-house talent is a great option - as long as you can commit the training, nurturing and management resources. If you're looking for a more hands-off option or can't commit resources to managing an in-house hire, an agency will likely be a better fit. Hiring in-house typically requires more time and resources to make a new hire successful (i.e. onboarding and ongoing training), whereas hiring an agency could give you more flexibility. In addition, agencies can often take on short-term projects with tight deadlines.

#5 What's my budget?

Your budget likely has the final say in your decision-making process. So, using your answers to the previous questions, think about how you can best stretch that budget. Does it make financial sense to add new team members or to outsource to an agency?

Get A Little Help in Answering These Questions

You know you need to make a hire to achieve your goals. And it's a big decision.
If you're wondering what an agency can bring to the table, we'd love to chat with you. You tell us your hopes, wishes, dreams, goals and needs, and we'll give you options and honesty.
Thanks to the emergence of technologies such as mobile personal assistants like Cortana and smart speakers like Amazon Echo and Google Home, there's no doubt that voice search is on the rise. These days, consumers can dictate text messages hands-free while driving or use a mobile personal assistant to complete simple actions. In fact, Gartner predicts that about 30% of searches will be conducted without a screen by 2020. In addition, a study from ComScore states that voice searches will account for nearly 50% of all searches, too.
That means we marketers need to start thinking about how we can get our content in front of our audience via voice search channels. While optimizing content for voice search can seem daunting, there are a few easy tips that can help you start gaining more visibility for those types of queries.

Focus on Featured Snippets

We continue to see more and more featured snippets in search engine results pages (SERPs). These SERP features surface qualified answers right on the SERP, which means searchers get answers to their questions more quickly. In addition to speeding up the way people receive answers on Google, we know that featured snippets drive more organic website traffic, too. Featured snippets can help you leapfrog the competition on a SERP to gain more visibility, as opposed to relying only on a main keyword ranking. Here's an example of one of TopRank Marketing's own featured snippets.
Back in July, Britney Muller of Moz gave a presentation at MnSearch about the future of SEO. One area she focused on was how to rank for featured snippets, because voice search is fueled by them. With that in mind, she outlined what she thought were the top five ranking factors for featured snippets:

[list]
[*]Links
[*]Quality content
[*]On-page optimization
[*]Engagement metrics
[*]Speed
[/list]

None of these ranking factors is new to the SEO industry, but they make sense to focus on. Links are still an important ranking factor, as are content quality and on-page optimization. Engagement metrics and site speed have long been important as well, but the focus on these areas is increasing. Both relate to the experience on mobile devices, since that is where the majority of voice searches come from.
To find featured snippets to target, use tools like SEMrush or Ahrefs to reverse engineer the content that's already winning them. Most featured snippets run around 40 to 50 words, so it's important to keep your content concise and clearly matched to the searcher's intent. To trigger a featured snippet, use conversational language and/or questions. A quick way to leverage question-based featured results is to create an FAQ page with common questions about your business or industry.

Use Conversational Keywords

Speaking of conversational keyword queries, they reveal intent more clearly than the "money" (or more traditional) keywords. This often leads to longer queries for voice searches. For example, a traditional "money" keyword may be something like "content marketing software." But a more conversational, voice search keyword query may be something like "what is the best content marketing software."
Google has been encouraging this type of behavior for years, especially with the Hummingbird update back in 2013. People communicate in conversations, not just keywords. Associating the right keywords with concepts improves overall content quality, as opposed to targeting only one or two keywords per page. So, it's important to identify the keywords people search for, but focus on creating content that is more conversational.
When it comes to local search, include keywords or landmarks that people in the neighborhood would use. That way, search engines can correlate the content with a geographical area, which can help increase the local visibility for that piece of content. After all, many voice searches are from people looking for directions to local businesses.
Another place to find conversational queries is the chat feature on your website (if you have one). People use natural, conversational language when chatting, which could lead you to create content that your audience is directly looking for.

Add Structured Data Markup

Schema markup helps search engines understand the content on your website. By helping search engines understand the context of your content, you enable them to provide more informative results for users. Adding schema markup for local businesses can help a business show up in local results for general business information. This information can be highly beneficial for voice searches for directions and phone numbers. Schema.org is a great place to start if you want to learn more.
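If you're curious what schema markup looks like in practice, here's a minimal sketch of LocalBusiness structured data. It's built in Python purely for illustration, and every detail (business name, address, phone number) is a hypothetical placeholder. The resulting JSON-LD is what you would embed in your page inside a script tag of type application/ld+json.

```python
import json

# Hypothetical LocalBusiness details - swap in your own business information.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Coffee Shop",
    "telephone": "+1-555-555-0199",
    "url": "https://www.example.com",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Minneapolis",
        "addressRegion": "MN",
        "postalCode": "55401",
    },
    "openingHours": "Mo-Fr 07:00-18:00",
}

# This JSON string is what search engines read to learn your name, phone
# number, and address - exactly the details voice searches often ask for.
markup = json.dumps(local_business, indent=2)
print(markup)
```

Once you've generated markup like this, validate it with a structured data testing tool before publishing, since a single malformed field can keep search engines from using it.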
If you have a brick-and-mortar location, you should add schema markup for each place and create a Google My Business listing (and other local citations) to help your audience find you. Voice search results often draw on review websites like Yelp and other third-party sites, so optimize your local citations to make sure they are all correct and consistent. Here's an example from Target. As you can see, the listing includes information on its headquarters and number of locations, as well as links to social profiles.
Beyond Voice Search

While the rise of voice search deserves your attention and action, it's still just one piece of your content marketing strategy. As always, it's important to focus on creating content that helps solve your audience's problems.
From our perspective, by creating quality, conversational and structured content, you'll not only be optimizing your content for voice search, but for the future, too. Why? Because voice search is not the end of the search revolution.
Many factors can influence a customer's decision making. How can you get them to buy products from your company instead of your competition? You have to find a way to influence their opinions. You can achieve this by leveraging social proof. The goal is to create a positive perception of your company. There's power in numbers. Let's say a prospective customer is searching for a product online. They know what they want, but they're not sure which ecommerce store to buy it from. What are some things they may be looking for? Company A has over 500 reviews. Company B has only 7 reviews. Which one do you think has a better public perception?
Look at the factors in this graphic as a reference point. Obviously, the company with more reviews will seem more attractive to new customers. That company feels more reliable. Over 500 people took the time to write a review, so they must be legitimate, right? Honestly, the quality of the product is irrelevant here. Company B could have a far superior product, but if nobody knows about it, it's useless. Don't get me wrong: quality is obviously important. If you're selling a product that's faulty or has lots of problems, social proof can backfire. You may get hundreds of reviews, but if they're all negative, it could put you out of business. Regardless of your company type, industry, or current reputation, I'll show you how to improve your conversions by using social proof.

Use celebrity endorsements

Don't let the term celebrity throw you off. Unless you have lots of connections, it's probably not realistic for you to land a superstar like Jay-Z, Shaquille O'Neal, or Tom Cruise to endorse your product. If you want someone like Selena Gomez to recommend your company on her social profiles, it will cost you $550,000 per post. That's outrageous. Instead, look for regular people with large followings, especially on social media platforms like Instagram. Here's an example of how Bose used Russell Wilson to create social proof:
Over 250,000 people viewed this video. If Russell Wilson says it works, then it must, right? That's the power of social proof. Keep in mind that the Federal Trade Commission requires social influencers to clearly disclose their relationships with the brands they promote. That's why Russell used the #Ad hashtag in this post. I know what you're thinking. Maybe Russell Wilson isn't an A-list celebrity, but he's still an NFL quarterback. You can also find local celebrities or regular people with lots of social media friends. Browse through your followers. Do you see anyone with 10k, 20k, or maybe even 50k followers? Reach out to them directly to see if they'd be interested in becoming a brand ambassador for your business. You may even have better success with these people as opposed to celebrities with millions of followers. Why? It's easier for someone with 20k followers to stay more engaged with their fans.
Get out there, and try to find people to endorse your brand and products. It doesn't have to be Justin Timberlake - anyone with a large social following can help you generate social proof.

Proudly display your best numbers

Let your numbers do the talking for you. How many people bought your product or downloaded your ebook? Tell your customers. Post this information on your website in real time. Here's an example from Nosto:
What screams social proof louder than 22 billion? Here are some other options you may consider using: [list] [*]How much money have people saved by using your business? [*]How many social media followers do you have? [*]How many customers have you served? [/list] But if you don't have impressive numbers, omit them. For example, let's say you have only 450 Instagram followers. That's nothing to brag about. First of all, if that's the case, you need to learn how to build a larger Instagram following. But don't include that number on your website. Instead, show off your strengths. If you have 30,000 followers on Twitter, that's something you'll want to showcase. Here's another example from Kissmetrics:
The homepage shows how many companies use their behavioral analytics and engagement platform. It creates social proof. If it said 10 companies use our service, nobody would be impressed. But 900 is nothing to sneeze at. It's impressive. Take a look at your best numbers to see which ones are worth displaying on your website.

Display visual proof of your product in action

Photos are powerful social proof. Images can help reinforce the idea that your product works. Remember the example of Bose we looked at earlier? Russell Wilson had the speakers under water. And it was effective. Why? Because it's one thing to tell people that something is waterproof, and it's another to show them. That's why you should include before-and-after photos on your website. Proactiv has been doing this for years:
This page on their website encourages users to upload their own before-and-after photos. They want to hear from their customers because it will show any skeptics that the product works. It's a great idea. Plus, storytelling is an effective way to engage and persuade someone. Think about your brand, products, or services for a minute. What kinds of images would generate social proof? Let's say you're a carpet cleaning company. You could show a dirty rug vs. a clean rug. Before-and-after photos work well for anyone involved in the health, wellness, and fitness industry. Here's another example from a fitness company:
Do you look like the guy on the left? Well, we can make you look like the guy on the right. And we promise to do it in 90 days or your money back. It's an impressive marketing strategy. Visual evidence of your product working will improve conversions.

Give your customers incentives for writing reviews

Let's take our last point one step further. Sure, you can always post photos on your website. But they'll mean a lot more to prospective customers if they see reviews from other users. That's why people research companies on websites like:

[list]
[*]Yelp
[*]Angie's List
[*]TripAdvisor
[*]Google Local
[/list]

Your company should have a profile on as many of these platforms as possible. This will increase your chances of getting more reviews. It's all about customer preferences. Some people may trust only Yelp reviews, while others will check your ratings on Google. If you have one but not the other, you're alienating potential new clients. Encourage people to upload photos when they leave a review.
Earlier we discussed how visual evidence could impact a buyer's decision making. Based on the graphic above, we know user photos are far more important when it comes to generating social proof. Customers may feel a professional photo on a company website glamorizes the product. To some extent, they're right. Obviously, you're not going to willingly share images that portray your business in a negative light. But customers feel they can trust other customers. Here's a helpful tip for convincing customers to leave reviews. Be direct, and ask for a review. There's nothing wrong with this approach. If you have a brick-and-mortar location, make sure your staff understands the importance of customer reviews. Before a customer leaves, train your staff to say, "Don't forget to write a review on Yelp." If a customer bought something from your ecommerce store, send a follow-up email with a direct link to your profile on a review website. Look how Zappos accomplishes this with their email campaign:
The message is short and direct. All they're asking for is a review, nothing else. What's the incentive they offer? Helping others. Make sure you give your customers a good reason to leave a review. Providing valuable insight to other consumers may work for some people, but other customers may need some extra motivation. Here's an example from The Body Shop:
Let's be clear. You're not offering an incentive for customers to leave a positive review. Obviously, that's what you'd prefer, but you can't control that. Notice how The Body Shop just says, "Tell us what you think." It doesn't specify good or bad. Either way, as a customer, you will get 10% off your next purchase if you write a review. This incentive can be the extra motivation customers need to generate social proof for your business.

Create surveys and share the results

Sometimes people won't take the time to leave a full review. It's understandable. You have to realize people are busy, and an incentive may not persuade all your customers. Here's where you can use a survey to your advantage. Rather than typing customized reviews, a customer can simply click on some predetermined survey responses. It's quicker and takes less effort, but it can be just as effective. Here's an example from Nordstrom:
If you're not having much luck generating customer reviews, see if your customers will respond better to surveys.

Get testimonials from experts in your industry

Customer opinions are valid, but does the customer always know what they're talking about? An expert is another matter. If you have customers with high credibility, see if they are willing to give your business a testimonial. Figure out which experts in your industry may be relevant to include. For example, if you're a mattress company, getting a positive testimonial from a chiropractor makes sense. Other experts to consider for various industries could be:

[list]
[*]Lawyers
[*]Doctors
[*]Teachers
[*]Physical therapists
[*]Mechanics
[/list]

Here's an example from Kissmetrics:
Follow this template. Try to include the expert's:

[list]
[*]full name
[*]company
[*]title
[*]photograph
[/list]

How did your company help them? Be specific. In the example above, the testimonial cites a "30% lift in conversions." All of these factors help contribute to social proof.

You're allowed to brag

Growing up, your parents may have told you not to brag. I'm here to tell you it's okay to do that. Let everyone know about your success and what you're good at. I'm not saying you should brag about how much money you made last month, but boast about anything that establishes your credibility. Were you featured in a respected publication? Did a popular website use your business as a reference or resource? Check out this example from Roma Moulding:
Forbes Media is a global media, branding and technology company, with a focus on news and information about business, investing, technology, entrepreneurship, leadership and affluent lifestyles. They are recognized across the world. Getting featured on their website is a big deal. Don't be afraid to share information like this with your customers. If a company such as Forbes says you're legitimate, then you must be, right? That's the power of social proof.

Come up with a customer referral program

We've already established that customers trust other customers. Customer referrals can generate social proof. If someone had a bad experience with a brand, they won't recommend that company to their friends and family. If you get a referral from someone you trust, it implies they had a good experience. They want you to get the same positive interaction. Look at the impact referrals can have on your business:
You increase the chances of getting a conversion through customer-to-customer recommendations. Let's take this a step further. Yes, your customers may love your business. But will they go out of their way to spread the word? Maybe. As with reviews, sometimes people need some extra motivation. Offer an incentive, like Airbnb does:
It doesn't need to be over the top. Just give them some encouragement to share your brand with their friends. Trust me, it works.

Use Facebook

We've discussed the importance of generating social proof through Instagram and review websites such as Yelp or Google Local. But that's not enough. Encourage customers to review your brand on Facebook. Facebook has such a wide reach that you can't afford to leave it out of your social proof strategy. Think of it like this. How many followers do you have on Facebook? How many friends do your followers have? You're indirectly connected with all those people even if they don't follow you. If your customers comment and write reviews on your Facebook page, it will show up in the news feeds of all their friends. It's great exposure for your brand. Here's something else to consider: Facebook is the top platform for positive reviews.
Comments on your Facebook page are more likely to paint your company in a positive light than comments on other review websites. How can you encourage people to write reviews on your Facebook page? Engage with your customers. Like their posts. Respond to their comments. Make sure your profile is active. All of these factors can help generate social proof on Facebook.

Conclusion

Customers trust other customers. One of the best ways to improve your conversions is by leveraging social proof. This strategy won't cost you much. Sure, it might involve some promotional giveaways, but for the most part, it's free. Display your best numbers. Show your customers how many people visited your website or downloaded your app. It gives your company more credibility. You can also brag about certain achievements, like being featured in a popular magazine. Encourage customers to review your products. It's even better if they upload their own photos. People trust user photos more than professional ones. Images are a powerful way to prove your product works. Incorporate some visual demonstrations and some before-and-after shots whenever possible. Get an endorsement from a celebrity or expert. It doesn't have to be Brad Pitt, but find someone with a large social following and send them some free products. If you follow this advice, you'll create social proof for your product or service and improve your conversions. What incentive will you offer your customers to review your brand on Facebook?
If you were Amazon CEO Jeff Bezos, how would you structure your testing and experimentation process to drive growth? Let's look at what Bezos says about experimenting (emphasis mine):

One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it's going to work, it's not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there. Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a 10% chance of a 100-times payoff, you should take that bet every time. But you're still going to be wrong nine times out of 10. We all know that if you swing for the fences, you're going to strike out a lot, but you're also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it's important to be bold. Big winners pay for so many experiments.

As CEO of Amazon.com - if not the world's first, then certainly the largest and most successful e-commerce business (one that by now is involved in industries far beyond retail) - Bezos convincingly puts forward the case for adopting a test culture in any e-commerce environment. In this post, we'll look at how you can structure your in-house e-commerce CRO program and create a testing plan that grows with your organization. You might not be Amazon, but why not swing for the fences?
Plan to Fail (and Learn From It)

The process of conversion rate optimization, or CRO, aims to make e-commerce companies more profitable by increasing the proportion of purchasers to total visitors. A structured process - encompassing research and hypothesis creation, testing itself, and the prioritization and documentation of those tests - is crucial to creating a testing culture that produces sustainable long-term results. In most of these steps, the need for a plan is obvious. But most people don't plan for the testing phase. In fact, testing is frequently regarded as an end in itself. However, testing is just the culmination of the entire process that stands behind it. Its real end goal is to increase revenue. In the same way that it's not possible to formulate and create tests without prior research, it's also not possible to run tests without planning. And moving from conducting individual tests or a sequence of tests to full-scale, constantly active testing is what separates a one-off CRO sprint from a thought-out, deliberate CRO program. Guess which approach is better for establishing a testing culture that enables companies to grow while absorbing their mistakes? Making mistakes and failures an integral part of growth means embracing the main components of any learning process. Each experiment, no matter how successful or unsuccessful, is a learning opportunity for you and your organization. Implementing and integrating the knowledge that results from your tests is one of the primary tasks of a viable CRO testing program.

Just a few reasons you should structure and document your testing program:

[list]
[*]Testing every aspect of your website enables you to challenge your prior assumptions by grounding alternative assumptions in data - instead of opinions or wild guesses.
[*]Experimentation allows you to estimate the results of all improvements in real time, without having to wait for the end of the quarter to see improvement (or lack thereof).
[*]By applying deliberate structure to the testing process, you make it easier to follow, teach, and repeat.
[/list]

All of this makes conversion optimization testing a pivotal consideration for any business with ambitions of growth. One of the most efficient ways to set yourself up for e-commerce CRO success is to establish an ongoing process within your organization, with a specific, dedicated team. This requires you to consider CRO not as an a la carte service provided by an agency, but as an opportunity to institutionalize and embrace the CRO process. And it requires that you learn to conduct tests yourself.

Why is a Testing Program a Necessity?

Note: If you want to test one hypothesis at a time, you can go ahead and skip this section. Why? If you're running one test at a time, your testing plan and program will be the same as the hypothesis prioritization list (which we'll talk about below). There's just one small issue that may bother you - the time required to put all your hypotheses to the test. If you choose to go the one-test-at-a-time route, be prepared to spend some time on the journey. If you have 25 hypotheses to test, the best-case scenario is roughly two years of testing. Why would it take two years? The recommended practice is to run each experiment for at least a month (or until the test reaches significance and/or covers a few buying cycles) to ensure valid test results. Significance is a statistical concept that allows you to conclude that the result of an experiment was actually caused by the changes made to the variation, and not by random influence. It's key to ensuring that tests are actually valid and that their results are sustainable and repeatable. Alex Birkett, Content Editor for ConversionXL, explains the concept of significance a bit more in-depth: What we're worried about is the representativeness of our sample. How can we do that in basic terms?
Your test should run for two business cycles, so it includes everything external that's going on:

[list]
[*]Every day of the week (and tested one week at a time, as your daily traffic can vary a lot)
[*]Various different traffic sources (unless you want to personalize the experience for a dedicated source)
[*]Your blog post and newsletter publishing schedule
[*]People who visited your site, thought about it, and then came back 10 days later to buy [your product]
[*]Any external event that might affect purchasing (e.g. payday)
[/list]

The 1-month rule above holds true for most websites. Those with exceptionally high traffic (ranging into millions of unique visits) will undoubtedly be able to achieve significant results within shorter periods. Still, to eliminate outside influences, it is best to let tests run for at least a full week or two. Say you have 37 different hypotheses to test. Your ideal aim is probably to create all 37 tests and conduct them all at once, as an alternative to going through the process of testing one by one. Sadly, this isn't possible either, for a different reason. Sometimes the experiments themselves will conflict with one another, limiting their usefulness or even invalidating each other's results. Since none of us want to grow old waiting for our conversion optimization efforts to reach fruition, we need an alternative. That's where the concept of testing velocity comes in. Testing velocity is an indicator of how many tests you conduct in a given time frame, such as a month. It is one of the key metrics of testing program efficiency: the higher the velocity you achieve, the quicker your program will bring increased revenue. Provided, of course, you do everything right.
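To make the significance requirement above concrete, here's a rough sketch in Python of how many visitors each variation needs before a test can reach significance, using the standard two-proportion sample size approximation. The baseline conversion rate and expected lifts are hypothetical examples, not benchmarks.

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(baseline, lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-proportion z-test.

    baseline: control conversion rate (e.g. 0.03 for 3%)
    lift: expected relative lift (e.g. 0.25 for +25%)
    """
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A bigger expected lift needs far fewer visitors to reach significance:
n_small_lift = required_sample_size(0.03, 0.10)  # expecting a 10% relative lift
n_big_lift = required_sample_size(0.03, 0.25)    # expecting a 25% relative lift
print(n_small_lift, n_big_lift)
```

Running numbers like these against your actual daily traffic is a quick way to sanity-check whether a hypothesis is even testable on a given page before you invest in building the experiment.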
This is the simplified process of creating a testing program.

The Building Blocks of Your Testing Program

The main elements that will determine the dynamics of your testing program are:

[list=1]
[*]Traffic volume
[*]Interdependency of tests
[*]The ability to support the design and implementation of multiple tests at once (operational constraint)
[/list]

Let's quickly go through what each of these elements means.

Traffic Volume

Traffic volume is an obvious constraint, since your website traffic will influence not only what types of tests you can run, but also how many concurrent tests, and which pages will draw enough traffic to support tests. Traffic volume is the reason to prioritize tests that have the greatest projected effect. Tests with higher expected lift have much lower requirements in terms of the sample size/traffic volume needed to reach statistical significance. In practice, this means that if we expect a test to increase conversions by, say, more than 25%, we will need fewer observations to confirm this expectation than if we were expecting a 10% increase. This is a consequence of using a t-test as the statistical engine for running experiments: the smaller the effect of a change, the larger the sample needs to be in order to smooth out outliers and reach statistical significance and confidence.

Interdependency of Tests

The ability to run experiments concurrently is a function of each experiment's dependency on the others. What does this mean? The basic principle is that we want to test a new page treatment on the maximum available number of visitors. If you happen to set up an experiment that filters people out of the next experiment, you will not be abiding by this basic principle. If your visitors are split 50/50 on an initial page, meaning that half do not get to see the next page that's also being experimented on, you will not have a valid test result. For example, you may want to improve your funnel.
So you create experimental treatments (variations) that will run on two different steps of the funnel. This may mean that visitors who are shown one page never get to see the other - the outcome of one experiment influences how many people enter the other experiment you are running. Your sample will automatically be 50% smaller, meaning the test will have to run twice as long as it otherwise would have needed to achieve significance. [center]Running concurrent experiments can cause interdependency issues To prevent this issue, estimate the interdependency risk prior to creating an experiment, and run interdependent experiments separately. You can sometimes solve this issue by using multivariate tests (MVTs), but sometimes your traffic volume will preclude this. Additionally, too many variants in MVTs can invalidate the experiment results.
Did you know? With the Kissmetrics A/B Test Report, you can see how a test impacts any part of your funnel. Running a test on your homepage and want to see how it affects lead quality at the bottom of the funnel? Find out in 10 seconds with the A/B Test Report.
Operational Ability - How Many Tests Can You Design and Actively Run? In an ideal world, we would all be testing all the hypotheses we've created just as soon as the research is complete! However, creating and running an experiment is hard work. It requires effort from multiple people to create a viable and functional test. Once the research results are in and you have framed your hypothesis, the experiment won't just spring into existence. Making an experiment requires preparation. At minimum, you need to: [list=1] [*]Sketch out an updated visual design, which you'll use to create a mockup or high-fidelity wireframe [*]Create an actual design based on the mockup [*]Code the design/copy changes [*]Perform a quality assurance check and do a dry run before the test is live [/list] All this requires time and effort by a team of people, and some of the steps cannot even begin before the previous ones are complete. This is your operational limitation. You can overcome operational limitations by either hiring more people or limiting the number of tests you run. Adjust Testing for Outside Influences While it would be great if every experiment happened in a vacuum, this just isn't the case. Website experiments performed for the purposes of conversion optimization will never enjoy the controlled environment of scientific experiments - where the experimenter can maintain control over every influence other than the one being intentionally changed. However, we can at least account for obvious or expected test influences, such as holidays that affect the shopping habits of our customers or other predictable events that may change buyer behavior. By taking these factors into account when framing your plan, you can run the experiments at a time when the risk of outside influence is smaller.
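Scheduling around those predictable events is mechanical once you list them as blackout windows. A minimal sketch (the dates, the sale week, and the function name are all hypothetical illustrations, not part of any testing tool):

```python
from datetime import date, timedelta

def pick_start_date(earliest, duration_days, blackout_ranges):
    """Return the first start date on or after `earliest` whose full
    run avoids every (start, end) blackout window -- e.g. a holiday
    sale or payday period that would skew purchasing behavior."""
    day = earliest
    while True:
        end = day + timedelta(days=duration_days - 1)
        # Two date ranges overlap iff each starts before the other ends
        overlaps = any(day <= b_end and end >= b_start
                       for b_start, b_end in blackout_ranges)
        if not overlaps:
            return day
        day += timedelta(days=1)

# Hypothetical: a 14-day test that must dodge a Black Friday sale week
sale_week = (date(2017, 11, 20), date(2017, 11, 27))
start = pick_start_date(date(2017, 11, 15), 14, [sale_week])
```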
Even More Benefits of Creating a Testing Plan Having a testing plan not only makes your CRO process faster and more effective - it has a number of important additional benefits. Let's start with the benefit that's most important in the long run. A test plan structures and standardizes your approach, making it repeatable and predictable. An active, structured testing process with no expiry date essentially creates a positive feedback loop, so that even when your testing plan reaches its conclusion, you'll feel encouraged to seek new challenges and run more tests. In the long run, this leads to the establishment of a bona fide testing culture within your organization. A structured process also allows for better feedback on the results. At each phase's conclusion, you can review the results, update your expectations for the next phase, or adjust experiments that failed in the previous phase. In effect, you're learning as you go. Finally, a testing plan just plain-and-simple allows for better reporting and makes a more persuasive case for conversion optimization as an organizational must. If you are able to report progress in monthly increments, with results clearly attributed to experiments (which were built on hypotheses, which were derived from research), you're much more likely to gain organizational support for your CRO program. A testing plan creates clear milestones and enables the research team to accurately track progress, plan future activities, and remove potential bottlenecks in deploying and implementing experiments. That way, the chance that the testing process may spiral out of control is completely sidestepped, and each team member's role is clear. How to Structure Your Testing Plan We've just explored why you need to make a testing plan prior to actual testing - let's call that step zero, if you will. Now let's talk about the nuts and bolts of creating that plan. First, figure out what type of test(s) (A/B test, MVT, or bandit) you'll run. 
Test type determines how much traffic you need, as well as the development effort necessary to deploy experiments. Next, you need to carefully estimate the interdependency of your tests and make adjustments to your priority list if any tests clash with each other. Finally, to determine the number of experiments you can run, estimate how many you can effectively support with available staff. Take into account that you need researchers to frame hypotheses, plus designers and front-end developers to create variations and set up the experiment itself. Since each of these groups will have a number of tasks to attend to, you need to make sure you run only as many tests as your staff can support. To ensure this, start by going through your list of hypotheses. If you prioritize tests accurately according to the effort necessary to deploy them, you'll already have many of the inputs for your test plan. Ultimately, your testing plan should take the form of a Gantt chart, which is very helpful in indicating the time frame for each test phase. [center]A test program is usually presented in the form of a Gantt chart A test phase contains all the tests that can be run simultaneously. For example, if you discover you can run four tests simultaneously, and you have 22 tests to run based on your hypotheses, you'll have six test phases (five full phases of four tests, plus a final phase with the remaining two). Your test plan should also list every proposed test and provide the following concise information for each: [list] [*]Related hypothesis (the why of the test) [*]Required sample size [*]Expected effect [*]Who will be the subject (target segment or audience) [*]Where it will run (URL of the page) [*]When (the time period in which it will run) [*]Rough description of changes (the what of the test) [*]How to measure success (what metrics the experiment should improve/affect to be considered a success) [/list] If you structure your testing plan this way, you will maximize your test velocity and allow for maximum efficiency of your optimization program.
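The scoring-then-chunking step above can be sketched in code. This sketch assumes an ICE-style model (impact × confidence × ease) purely for illustration - the article leaves the exact prioritization model up to you, and the backlog here is hypothetical.

```python
def ice_score(impact, confidence, ease):
    """ICE-style score (rate each 1-10; higher = test sooner).
    ICE is one common prioritization model; PIE is an alternative."""
    return impact * confidence * ease

def plan_phases(prioritized, concurrent_slots):
    """Chunk a sorted test list into sequential phases of tests
    that can run simultaneously."""
    return [prioritized[i:i + concurrent_slots]
            for i in range(0, len(prioritized), concurrent_slots)]

# Hypothetical backlog of 22 scored hypotheses, sorted best-first
backlog = sorted(
    ((f"hypothesis_{n}", ice_score(n % 10 + 1, 7, 5)) for n in range(22)),
    key=lambda item: item[1], reverse=True)

phases = plan_phases([name for name, _ in backlog], concurrent_slots=4)
# 22 tests at 4 per phase -> 6 phases, the last holding the final 2
```

Each inner list maps directly onto one bar group in the Gantt chart.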
How to Prioritize and Assign Testing Tasks Once you create and structure a plan, the only remaining ingredient necessary for success is to actually run through the process. Obviously, both to secure the greatest possible revenue and to create initial confidence, the first tests you run should be those you expect to have the greatest effect. Select the hypotheses that have high importance (for example, issues that affect your users' movement through the funnel); that you are most confident will work; and that require the least effort to implement. You can choose a prioritization model to apply to hypotheses during the research process. Apply the model properly and if your estimates are correct, you will almost certainly get the results you're looking for. For each experiment to succeed, you need to translate hypothetical solutions into practical web page designs as accurately as you can. When you have a mental image of the variation you want to test, translate that into a visual image using a wireframe or mockup. Hand that off to your designers, who can turn it into an actual web page. While the visual design is being prepared, your front-end developers need to check if any additional coding will be necessary to implement the variation. The most important part of implementing an experiment is to ensure that it's set up free of any technical issues. Do this by making quality-assurance protocols and checks part of your testing program. Once a given step in the experiment development cycle is complete, staff involved with that step can immediately start working on the following experiment. Having a plan enables them to advance further without any delay, and adds to the efficiency of your conversion optimization effort. Establishing a Culture of Experimentation Building a testing culture is the main objective of a structured CRO process. 
A testing culture requires the company to switch from a risk-averse, slow-decision-making mindset to a faster, risk-taking approach. This is possible because testing enables you to make decisions based on measurable, known quantities - in effect reducing your risk. Extensive research is a necessary prerequisite of successful A/B testing (something that, hopefully, most people involved in testing already understand)! Suffice it to say that the role of research is well publicized, and there are a number of articles about it. We will also assume that by now, you know how to frame a hypothesis from this research. The hypothesis creation process is just as important to the ultimate success of your CRO effort as running the tests themselves. Only properly framed, strong hypotheses will result in conclusive A/B tests. In a structured CRO effort, no element should be left to chance. Extend the same careful treatment to actual testing as you afford to research and hypothesis creation. Once you've properly prioritized your hypotheses by the effort each will take, their importance, and their expected effect, you need to prepare your tests with the same forethought. How you approach setting up your testing program will greatly impact your end results. The aim of every good testing program is to attain the maximum test velocity and see meaningful test results in the shortest possible time. About the Author: Edin Šabanović is a senior CRO consultant working for Objeqt. He helps e-commerce stores improve their conversion rates through analytics, scientific research, and A/B testing. Edin is passionate about analytics and conversion rate optimization, but for fun, he likes reading history books. He can help you grow your e-commerce business using Objeqt's tailored, data-driven CRO methodology. Get in touch if you want someone to take care of your CRO efforts.
If marketers are going to start relying on artificial intelligence, they need to learn how to trust it. Contributor Matt Zilli discusses how transparency will help marketers understand what AI can do for them here and now.
You might be part of the group of marketers who feel as though your email campaigns are missing something. Only, you're not sure what they're missing. You've reverse-engineered your competitors' email campaigns to see what they're doing, but the truth of the matter is, you will never know the strategy behind their success because you don't have access to their analytics. So you end up in a cycle. You create emails, you write good copy and add relevant graphics, just like the guides tell you to, but you still don't see the kind of results everyone talks about. Email marketing is consistently one of the best marketing avenues to use.
So why aren't you seeing the same results? Many marketers make the mistake of not paying close enough attention to their email marketing analytics. If you're a marketer who isn't using data to fuel and guide your email-marketing campaigns, you're leaving serious money on the table. Data allows you to see what does and doesn't work so you can optimize your emails to perform better. It's a tricky but rewarding process that involves taking raw data and turning it into actionable insights to help improve your email-marketing campaigns. Doing so will put you leagues above your competitors. In this post, I'm going to explain the importance of using analytics to improve the way you segment your emails, improve the email content you send out and create winning email campaigns. It doesn't matter how brilliantly written your emails are, or how many well-designed images they contain, if you don't see any results or can't measure whether your efforts are helping you achieve your overarching goals. Let's dive in! Choosing a Vendor Looking at the current landscape of email marketing and the software available is often overwhelming. If you've already chosen a provider and are happy with it, move on to the next section. If we look at the email marketing software market radar below, it's clear there are a number of different vendors to choose from. Choosing an email service provider largely depends on what you hope to achieve and what feature(s) you're looking for.
Source: Email Marketing Market Research, Crozdesk Taking into account vendor size and the strength of the solution may help you evaluate which vendor to choose based on your business's particular requirements. For example, if you're looking to send automated, triggered email messages, you might use a tool like Kissmetrics Campaigns, or you might choose a provider like Sendgrid if you're looking to just send newsletters. The issue, though, is that although your choice of vendor will have some say in the types of campaigns you can run, they only go so far in providing you an honest view of how your campaigns are performing and what you need to do to improve them. If you are looking to improve your email-marketing campaigns, you need to consider utilizing analytics to provide you with the core insight into how your current campaigns are performing against your preset goals. Know Your Goals Before Choosing KPIs Before you begin, think about what you hope to achieve from email marketing. You need to set goals. Where most marketers go wrong is thinking their goals should be things like: [list] [*]Increase open rate [*]Increase click-through-rate [*]Reduce the number of people who unsubscribe [/list] Although these are some good metrics to follow (more on that later), they're not goals. Your goals should align with your business goals. For example, you might choose to do email marketing in the hope of generating more leads, growing your subscriber base or converting more leads into customers. Note: you can have more than one goal, but you'll have to tailor each metric to each individual goal. When you've chosen the goal of your campaign, it's time to work out which metrics you should be using to track the progress of your goal. [center]Image Source For example, 73% of marketers identified click-through rate as being one of the most useful metrics for measuring performance. But let's think about that for a second.
Say you're the marketing manager at a SaaS company. You might want to increase your open and click-through rates. The problem is, open rate and click-through rate are known as process metrics. They indicate the order of events that occur from when an email is sent to when it reaches the subscriber. But they shouldn't be goals in and of themselves. Now let's reframe the situation and change our goal to increasing the number of free-trial sign-ups. The reason isolating metrics is counter-productive is that it doesn't give you the full picture. Suppose that within your last campaign, you increased your click-through rate. You might think that's good, but the key question you need to answer is, did that increase the number of free-trial sign-ups? If the answer to that question is no, you need to work out why. If it did increase the number of free-trial sign-ups, can you correlate that to your click-through rate? Now, you can see how things like changing your email subject can have a direct effect on your click-through rate, which in turn has a direct effect on your conversions. The key is to not take each metric as an individual number, but to use these process metrics and incorporate them into your overall marketing strategy to increase your revenue, or whatever your end goal might be. If your goal is to attract more visitors to your website, you probably want to focus on growing your subscriber list, so this is the metric you need to be following. But what if your goal is to increase the number of leads generated? If this is the case, you should be tracking how many leads you're capturing each day/week/month. Choosing the metrics to follow largely depends on what sort of business you're running. A SaaS company might have different goals than an e-commerce company, which in turn might have different goals than a non-profit. Moving Beyond Basic Data If you want to win at email marketing, you need to think seriously about your analytics.
There is a lot to track, so I've broken the core analytics down into three categories: basic, advanced and expert, with each getting harder to come by as you go up the scale. Basic metrics Basic metrics are easily accessible and are also known as behavior metrics. Most basic email service providers will give you some information around these metrics. They include things like: [list] [*]How many people open your emails? [*]How many people click your links? [*]Which links get the most clicks? [*]What's the most common time people open your emails? [*]How many people unsubscribe (on average) from each email you send? [/list] You might already be looking at behavior metrics to improve your campaigns. But you're ruining your chances of developing a winning strategy if this is the only data you consider. What's the point in having 100% open rates if no one purchases? Something has obviously gone wrong, and understanding analytics further will help you understand why and where it all went wrong. An open case for advanced email metrics The thing about basic metrics like click-through and open rates is that they're simplistic: whilst they tell you who opened the email and who clicked through, they don't tell you much else. Moving beyond these basic metrics, consider your click-to-open rate. This metric tells you how engaging your email content is. It helps you understand whether the content of your email resonates well with your specified target segment. Working out this metric will give you the percentage of your subscribers who opened your email and also clicked on a link. It helps give you a clearer idea of the entire story.
So if one of your goals is to create engaging content, your aim should be to increase this percentage. Your click-to-open rate gives you an indication of how your subscribers behaved when they opened your email. It gives you a complete, holistic view of how your email content is performing. For example, you might have a low click-through rate but still have a solid click-to-open rate. If you judge your emails on just one metric, you won't get the full picture.
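The difference between the three rates comes down to the denominator. A quick sketch with hypothetical campaign numbers:

```python
def email_rates(delivered, unique_opens, unique_clicks):
    """Open rate and click-through rate divide by deliveries;
    click-to-open rate (CTOR) divides by opens instead."""
    return {
        "open_rate": unique_opens / delivered,
        "click_through_rate": unique_clicks / delivered,
        "click_to_open_rate": unique_clicks / unique_opens,
    }

# Hypothetical campaign: 10,000 delivered, 1,800 opened, 540 clicked
rates = email_rates(10_000, 1_800, 540)
# CTR looks modest (5.4%), yet 30% of the people who opened clicked --
# the content resonated; the subject line is the weak spot
```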
When you create a Kissmetrics Campaign, you set a Conversion goal. If the users you sent these emails to convert, they'll count in this converted list. So for example, if you send out an email to people about a sale, you can select your Conversion as Purchase. If they read your email, then go on to Purchase, they've converted. Advanced metrics Advanced metrics look at the results of your campaigns. They help you answer things like: [list] [*]How many people actually purchased one of your products or services after clicking on your email? [*]How much money do you make on average per email campaign sent? [*]How much (on average) does each subscriber bring you in revenue? [*]How many of your email subscribers convert into an actual lead? [*]What is your ROI? [/list] Expert metrics Expert metrics are also referred to as experience analysis. Experience analysis explains why your subscribers do what they do. Expert metrics are important because they show you what drives your subscribers' decisions and the motivations behind the choices they make when they choose to engage with or ignore your email. Instead of just knowing how many of your emails within a specific campaign were opened, you'll understand why one campaign has a higher open rate than another. You'll have a greater understanding of why revenue is higher or lower at certain parts of the year, for example. Now the issue is, for this area of analysis, you probably won't be able to gather this data from your email provider. You'll have to look further afield to get into your audience's mind and understand exactly what makes them tick. Understanding behavior analysis is important, but it only goes so far. If you want real insight, you need to know whether the people who are engaging with your emails are doing so because they're bored on the train to work, or because you framed your message right and they're interested in doing business with you.
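The advanced questions in that list are simple ratios once you have revenue data from your analytics or attribution tool. A sketch with hypothetical figures (the function and field names are illustrative, not any vendor's API):

```python
def advanced_metrics(revenue, campaign_cost, emails_delivered, subscribers):
    """Revenue-side numbers that basic ESP dashboards usually
    won't show you; inputs come from your own attribution data."""
    return {
        "revenue_per_email": revenue / emails_delivered,
        "revenue_per_subscriber": revenue / subscribers,
        "roi": (revenue - campaign_cost) / campaign_cost,
    }

m = advanced_metrics(revenue=5_000, campaign_cost=1_000,
                     emails_delivered=20_000, subscribers=10_000)
# An roi of 4.0 means every $1 spent returned $5 in revenue
```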
Using Your Data Now that you've gathered the right data, it's time to start listening and drawing the right conclusions. When you have collated the right data from your email campaigns, you'll be able to send better campaigns by first creating data-driven customer personas. You'll be able to identify who to target, when and why, and send them content you know will be useful to them. For a second, let's think about our own email inboxes. How many times per week do you receive irrelevant emails that seem as though they have nothing to do with you? How many times a week do you consider unsubscribing, or actually unsubscribe, from email newsletters? If everyone used their data to fuel their marketing campaigns, they'd have fewer people unsubscribing. Using a tool like Kissmetrics Campaigns will enable you to send automated, triggered emails based on users' previous behavior. The beauty of these emails is that they're not cold and they're not unwanted, because they're based on previous behavior. These emails are in place to nudge the user towards something, whether that be purchasing, logging in, etc.
When you start to use the right tools to get the right data you'll be able to: Define and segment your audience Who is your audience, and what sort of emails do they want to receive? When you're defining your audience, let's not forget your original goals from the beginning. In the example below, Pets At Home, a pet retailer, uses the name of the pet within their email copy. They also ascertain exactly what type of pet you have (cat, dog, rabbit, etc.) to ensure they only send you relevant, targeted emails that you're likely to open.
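A segmentation rule like the one Pets At Home applies is, mechanically, just routing subscribers on a stored attribute. A minimal sketch - the field names (`pet_type`, `email`) and sample data are hypothetical, not from any real subscriber list:

```python
def segment_by_pet(subscribers, known_pets=("cat", "dog", "rabbit")):
    """Route each subscriber into a content segment by a stored
    attribute; anyone without usable data falls into 'other'."""
    segments = {pet: [] for pet in known_pets}
    segments["other"] = []
    for sub in subscribers:
        pet = sub.get("pet_type")
        key = pet if pet in known_pets else "other"
        segments[key].append(sub["email"])
    return segments

subs = [
    {"email": "a@example.com", "pet_type": "cat"},
    {"email": "b@example.com", "pet_type": "dog"},
    {"email": "c@example.com"},  # no pet data recorded
]
groups = segment_by_pet(subs)
```

The same pattern extends to behavioral attributes - customer type, spending history, adoption status - which is where the real wins are.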
If you don't segment your emails, you will end up sending general emails that attempt to appeal to everyone but end up appealing to no one. It's shocking that more marketers aren't segmenting their audience based on data, because segmented emails generate 58% of all email revenue. When you choose to segment your audience, you improve the personalization of the emails you send. You can segment your audience by demographic data such as: [list] [*]Age [*]Income level [*]Gender [*]Occupation [*]Marital status [/list] But most importantly, if you want real success, look at how your audience is behaving and segment based on that in relation to the overall goals we spoke about before. You might consider things like customer type, spending history, adoption status, etc. Targeted, personalized content Once you've segmented your audience, you'll be able to send specific, relevant content to different cohorts of people. 74% of marketers say targeted personalization increases customer engagement. Targeted messaging involves understanding your audience and tailoring content and offers that speak to them at the different stages of their journey with your brand. In simple terms, it means using the information about the audience within that segment to guide your message. If you're a SaaS company and you have a segment of subscribers who have yet to try your software, sending them an email letting them know there's another chance to get a free trial will obviously be more relevant than sending that email to someone who is already making great use of your software. Email Marketing Shouldn't Happen in Silos As we've said, email marketing shouldn't happen in isolation from your other marketing efforts; they should all be connected. It should be there to support your overarching, larger goals. Often, your email audience will be prompted to visit your website after reading an email.
It's important to continue looking at the data once they land on your website to see if the whole cycle from email, to lead, to conversion could be improved. Use a heatmap tool like CrazyEgg to see where your visitors are clicking and interacting.
Doing so means the hard work isn't lost to a poor landing page that doesn't perform. What's more, if you're already using Kissmetrics Campaigns, you can use the platform to track website behavior too. Having a tool that tracks both the way your audience interacts with your emails and the way they interact with your website will give you a much clearer idea of what is and isn't working. You'll not only get to understand the behavior, but you'll be able to see what they actually did on your site and see exactly who they are. Testing and Analyzing Even after you've defined your overarching goal and the metrics you need to follow to achieve it, you should always be testing. Because your email-marketing campaigns are now data-driven, you will have a clearer idea of what elements you should be testing. If your goal is to increase landing page sign-ups, you might decide to track your open and click-through rates. If you notice you have low open rates but high click-through rates, that should tell you that the content of your email is good, but you need to improve your subject line to encourage more of your subscribers to open your email. Analyzing your results in this way will improve your campaigns. It will give you a clearer idea of whether or not you're focusing on the right metrics and also whether the things you're doing to improve your campaigns are actually working. In short, look at the metrics you've chosen, compare those to the desired goal and devise a list of ways to improve next time. Takeaways How do you measure your success? Do you look at your open and click rates? Do you look at the number of people who unsubscribe and hope it's lower than your last campaign? If you do any of these things, you're utilizing the basic core metrics most email marketers use. But you're ignoring the most important and critical metrics that will actually enable you to improve your email marketing.
Finding data isn't hard; most email providers will offer some sort of analytics to help you understand how your current and past campaigns are performing. And for some marketers, just looking at open rate or click-through rate is perfectly OK. But what challenges most email marketers is finding the advanced, specific data needed to make the right changes to their campaigns. This post outlined how to define email marketing goals and use those goals to decide which metrics you should be concerned about. I've also explained why you need to look beyond the basic metrics to gain helpful insights into your subscriber list and how they behave. So, now you should be able to leverage your own email data to improve how your email-marketing campaigns perform. What ways have you utilized email marketing analytics to your advantage? Leave a comment below. About the Author: Jordie Black is a content marketer and strategist helping startups and SaaS companies in the B2B space improve the way they connect with their audience through content. Learn more about her at www.jordieblack.com or follow her on Twitter @jordieiam to keep up with her updates.
Need help understanding Machine Learning? We now live in an age where machines can teach themselves without human intervention. Sound scary? It should. Scary amazing, that is. Applications for machine learning extend from marketing to medicine to interstellar space travel. Find out what Machine Learning is, how it works, and how it will change the world. Infographic.
Google's New AI Is Better at Creating AI Than the Company's Engineers. Google CEO Sundar Pichai says his team has achieved AI inception with AutoML. AutoML is an artificial intelligence that can assist in the creation of other AIs. By automating some of the complicated process, AutoML could make machine learning more accessible to non-experts. Futurism Survey: 37% of online retailers started holiday preparations earlier this year. How early, you ask? 1 to 4 months earlier than 2016, according to a survey by BigCommerce. Along with starting early, retailers are optimistic: 88% expect an increase in holiday revenue. Marketing Land Oh joy (sarcasm): Facebook is bringing paywalls to Instant Articles in your mobile feed. Since more people than ever before are getting their news from social media, it makes sense that Facebook wants to help publishers by introducing subscriptions for content on its platform. And it's starting on mobile. The Next Web Digital Video Marketing Is A $135 Billion Industry In The U.S. Alone, Study Finds. Video capturing, creation, hosting, distribution, analytics and staffing is big business! In contrast, advertisers are expected to spend $83 billion on digital ads and $71 billion on TV commercials (a total of $154 billion) in the U.S. this year. Forbes Businesses can now sign up to add booking buttons to their Google local results. Google has finally added a feature to let you easily add a 'book online' button to your local business on Google Maps or Google Search. Soon, some businesses might not even need a website. Search Engine Land Snap is turning to programmatic ads for Snapchat shows. Advertisers can make programmatic buys on Snap Ads - 10-second vertical video units - across the app's public user stories, Snapchat-curated live stories and Discover publisher channels and Snapchat shows. Digiday
As Amazon Prime Hits 90 Million, Online Holiday Spending To Surpass Brick-And-Mortar. Deloitte predicts people will do 51% of their holiday spending online, making it the first time it may surpass in-store spending. Among high-income families that number jumps to 57%. Headed to the mall, anyone? Pass. MediaPost Facebook officially rolls out its discovery-focused 'Explore Feed'. The Explore Feed is now fully rolled out on mobile and beginning to show for desktop users. In case you didn't know, the Explore Feed is to help Facebook users discover more content across the social network, beyond posts from friends and Pages you already follow. TechCrunch Google Attribution Rolls Out To Thousands Of Marketers. Google is rolling out an attribution model it introduced in May, powering the platform with machine learning. Google Attribution is to help marketers analyze how top- and middle-funnel clicks and interactions impact conversions across channels. MediaPost Facebook Is Testing a Pinterest-Like Feature Called Sets. Oh look, Facebook has taken a break from imitating Snapchat and LinkedIn to imitate Pinterest. Facebook is now testing Sets, Pinterest-like themed collections that include status updates, photos, videos and links, and that can be shared with all friends or specific friends. AdWeek Snapchat dangles referral traffic with link sharing from other apps. This is such foreign territory for me, but go ahead, read on anyway: You now can share links from other apps via the iOS share sheet, allowing you to send a private message with the link to one or several people. And rather than just turning live location sharing on or off permanently, you now can opt to hide in Ghost Mode for 3 or 24 hours. TechCrunch Pinterest moves into paid search: What you need to know. Pinterest Ads Manager is now open to all businesses who have opened an account and uploaded at least one Pin.
It's time to fire up those experimental paid search budgets.Search Engine Watch The B2B CMO's Growth Strategy Turns Audience-Centric Over Product-Centric. B2B CMOs around the world are focusing on new buyers and new markets over new offerings when it comes to their growth strategies, a new study from SiriusDecisions has found. An enhanced customer experience is seen to have the biggest influence on growth strategies in the next 2 years.MarketingCharts
This morning I will be joining a sold-out crowd to celebrate the 100th Social Media Breakfast Minneapolis St. Paul (SMBMSP #100) event. The plan is a panel with Greg Swan and Jennifer Kane, moderated by Mykl Roventine. We'll be talking about what has changed since the event started in 2008 (it was founded by Rick Mahn), the lessons we've learned, and our thoughts looking forward. I have a feeling it will be a great collection of stories about successes, failures and the crazy world that social media has become. If you're reading this post early on Friday, you can follow the event from 8-10am CT on Twitter with the hashtag #smbmsp100. What was the top digital marketing news story for you this week? Be sure to stay tuned until next week, when we'll be sharing all new marketing news stories. Also check out the full video summary with Tiffani and Josh on YouTube.
It's important to keep trying to improve your email marketing strategy. The trends change each year, and you need to adapt. If you're still sending out the same boring newsletter or promotional offer you used 5 years ago, it's time to make some improvements and adjustments. But where do you start? You may want to try testing a couple of different templates or designs to see which one is the most effective. A/B testing is not strictly for people who want to update their old email strategies. It's great for business owners and marketers who are actively trying to keep up with new trends as well. Making minor changes to your subject lines, color scheme, CTA buttons, and design could drastically improve your conversions. If you've never attempted to A/B test your marketing emails, I'll show you how to get started. Test only one hypothesis at a time First, decide what you want to test. Once you decide what you're testing, come up with a hypothesis. Next, design the test to check that hypothesis. For example, you may want to start by testing your call to action. Let's look at how Optimizely tested their CTA button.
These two messages are identical. The only thing that changed was the wording of their call to action. They didn't change the color, design, heading, or text of the message. Optimizely simply tested Watch Webinar against View Presentations & Slides. The results were drastically different. Subscribers clicked on the variation nearly 50% more than the control group. You may want to run further tests on other components of the message. So, now that Optimizely knows which variation produces the most clicks, they can proceed with testing different subject lines that can increase open rates. Where do you start? Before you can come up with a valid hypothesis, you may need to do some research. Decide which component of your subject line you want to test. Here's some great data from Marketing Charts.
Based on this information, you could A/B test the number of characters in your subject line. You already know that subject lines with 1-20 characters produce the most opens. Take that one step further: your hypothesis could be that 11-20 characters will produce more opens than 1-10 characters. There's your variation. Let's say the first thing you tested was a CTA button, like in the Optimizely example. Now, you can move on to the subject line. If you tested the CTA and subject line at the same time, you wouldn't know which one was the biggest factor in your results. You can't effectively test a hypothesis with multiple variables. Testing one thing at a time will ultimately help you create the most efficient message. How to set up your A/B email tests All right, now that you know what to test, it's time to create your email. How do you do this? It depends on your email marketing service. Not all platforms give you this option. If your current provider doesn't have this feature, you may want to consider finding an alternative service. I'll show you the step-by-step process of running an A/B test through HubSpot's platform. Step #1: Select Email from the Content tab of your Marketing Dashboard
Your marketing dashboard is essentially the home page of your HubSpot account. Just navigate to the Content tab and select Email to proceed. Step #2: Click Create email
Look for the Create email button in the top right corner of your page. Step #3: Create your A/B test
Once you name your email campaign and select a template, you'll see the editing tab. Click the blue Create A/B Test button on the left side of your screen. Step #4: Name the variation
By default, this popup will have the name of your campaign with (Variation) after it. But you can name it something more specific based on what you're testing. For example, you can name it September News CTA Button Placement instead. Step #5: Change the variation based on your hypothesis
Now you can edit the two messages. Remember, the content should be identical. Change only the one thing you're testing. Step #6: Choose the distribution size of the test groups
A 50/50 split is the best distribution. But if you want to modify it, drag the slider to change the distribution ratio. Step #7: Analyze the results
After you send out the test, HubSpot's software automatically generates a report. Based on the test we ran, Version B had a higher open rate. So, that must be the clear winner, right? Not so fast. It was higher by less than 1% compared to the control group. The difference isn't significant enough to declare a definitive winner. It's an inconclusive test. That's OK. These things happen. If the results are within 1%, like in the example above, it's pretty clear they are inconclusive. But what about 5%? 10%? Or 15%? Where do you draw the line? You need to determine your natural variance. Run an A/A test (two identical emails) to determine this. Here's an example of an A/A test on a website:
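If you'd rather not eyeball the percentages, a standard way to judge whether an open-rate gap is real or just noise is a two-proportion z-test. Here's a minimal sketch in Python; the send and open counts are made-up numbers for illustration, not figures from the report above:

```python
import math

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """z-statistic for the difference between two open rates."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the assumption of no real difference
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return (p_b - p_a) / se

# Hypothetical test: 2,000 sends per group, 21% vs. 22% open rates
z = two_proportion_z(opens_a=420, sent_a=2000, opens_b=440, sent_b=2000)
print(round(z, 2), "significant" if abs(z) >= 1.96 else "inconclusive")
```

If |z| reaches 1.96 or more, the difference is significant at the conventional 95% confidence level. With these sample numbers, z comes out around 0.77, so a 1-point open-rate gap on 2,000 sends per group is inconclusive, which matches the intuition above.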
The pages are the same. But the one on the right saw 15% higher conversions. So that's the natural variance. Use this same concept for your email campaign. Send identical emails to see what the open rates and click-through rates are. Compare that number against your A/B test results to see if your variance results were meaningful. Test the send time of each message Sometimes you need to think outside the box when you're running these tests. Your subject line and CTA button may not be the problem. What time of day are you sending your messages? What day of the week do your emails go out? You may think Monday morning is a great time because people are starting the week ready to go through emails. But further research suggests otherwise.
It appears more people open emails in the middle of the week. You can run a split test between Wednesday and Thursday or Tuesday and Thursday to see which days are the best. Take your test one step further. Hypothesize what time you think your subscribers will open and click on your message. Studies show people are more likely to open an email in the afternoon.
Between 2:00 PM and 5:00 PM is the time when you'll probably see the most activity. Take this information into consideration when you're running an A/B test. Your opening lines are essential Earlier we identified the importance of testing your subject line. Let's take that a step further. Focus on the first few lines of your message. Most email platforms give the recipient a preview of the message underneath the subject. Here's what it looks like on a user's phone in their Gmail account:
Play around with the opening lines of your message. It's a great opportunity to run an A/B test. Look at some of the examples above. Banana Republic doesn't mention the offer in the first few lines. Why? Because it's written in the subject line. It would be redundant to include that information again in the first sentence. But if you keep reading, there's probably room for improvement. The next part of the message tells you that you can see all the images on their mobile site. That may not be the most efficient use of their preview space. There's one way to find out for sure: run an A/B test. Changing your opening lines can help improve open rates by up to 45%. Manually running an A/B test As I mentioned earlier, not every email marketing platform has an A/B test option built into their service. Other sites besides HubSpot that have an A/B test feature include: [list] [*]ActiveCampaign [*]Campaign Monitor [*]MailChimp [/list] But if you're happy with your current provider and don't want to switch for just one additional feature, you can still manually run an A/B test. Split your list into two groups, and run the test that way. It's possible you already have your contacts segmented by other metrics.
This can help increase open rates and conversions. But it's also an effective method for analyzing your hypothesis. You'll have to create two separate campaigns and compare the results, which is completely fine. You just won't see the comparison side by side on the same page, as we saw in the earlier example. If you're doing this manually, always run your tests simultaneously. Running tests on separate occasions could skew the results, because timing is a major factor in the analysis. Test a large sample size. This will help ensure your results are accurate before you jump to definitive conclusions. Running a manual test does not mean you should test more than one variable at the same time. Stick to what we outlined earlier, picking a single variation for each test. Experiment with the design of your email campaigns Once you have your subject line, opening sentences, and calls to action mastered, it's time to think about your existing template. You can keep all your content the same but change the layout. Here are some examples of different templates from MailChimp:
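If your provider won't split the audience for you, the 50/50 split itself is easy to do outside the platform. Here's a minimal sketch in Python; the subscriber addresses and list size are hypothetical, so swap in your own exported contact list:

```python
import random

def split_contacts(contacts, seed=2017):
    """Shuffle a contact list, then cut it in half for groups A and B."""
    shuffled = list(contacts)              # copy so the original stays intact
    random.Random(seed).shuffle(shuffled)  # fixed seed = reproducible split
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical export of 1,000 subscriber addresses
subscribers = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_contacts(subscribers)
print(len(group_a), len(group_b))  # 500 contacts in each group
```

Shuffling before splitting matters: if you just cut an alphabetized or signup-date-ordered export down the middle, the two halves may differ systematically. Upload the two groups as separate lists, send both campaigns at the same time, and compare the results.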
What do all of these templates have in common? The word count. None of these templates gives you space to write long paragraphs, because that's not effective. Keep your message short. Research from Boomerang suggests that your email should be between 50 and 125 words.
Messages in that range in their test sample got at least a 50% response rate. While you're experimenting with template designs, you can also try different images. Try one large background image with text written over it. Another option is to include a picture within the content. Your A/B template test can help determine which method is more effective. Swapping out one image for another is something else you can test. For example, if you're using a picture of a person, test the difference between a male and a female. Conclusion A/B testing works. If you've used these tests to successfully optimize conversions on your website, the same concept can be applied to your email marketing strategy. Before you get started, come up with a valid hypothesis. Don't start changing things without a plan. Test only one variation at a time. After you've come up with conclusive results for your first test, you can move on to something else. Try testing your: [list] [*]Subject line [*]Call-to-action wording [*]First few sentences of the message [*]Day and time of sending the email [*]CTA button placement [*]Templates [*]Images [/list] The email marketing service you're currently using may have an option for you to run and analyze the results of an A/B test. If not, it's no problem. You can manually run an A/B test by creating two separate groups and two different campaigns. This is still an effective method. A/B tests will help increase opens, clicks, and conversions. Ultimately, this can generate more revenue for your business. How will you modify the call to action in the first variation of your A/B test?