Unlocking AI potential in e-commerce and online business – opening the black box

Welcome! Our “Unlocking AI potential in e-commerce and online business” series aims to provide basic guidance on applying AI (Artificial Intelligence) in businesses that use the internet as the primary channel for delivering value to their customers.

In the first three articles we focused on AI essentials – terms, buzzwords, project management and data selection. Now, it’s time to dive a little deeper. The number of companies which have adopted AI (or have at least tried to) has grown steeply in the past few years.

  • R&D spending has dramatically surpassed advertising spending, according to an article published by Harvard Business Review in May this year

Following this trend, more and more questions emerge regarding the use of AI:

  1. Interpretability of machine learning
  2. AI and ethics
  3. Reliability and fairness when classifying people data

Let’s have a look at the first of these; enjoy your reading!


Are you new to AI and Machine Learning? If so, we recommend the first article in our “Unlocking AI potential in e-commerce and online business” series, which deals with basic concepts.


Behind the scenes

In the last article in our “Unlocking AI potential in e-commerce and online business” series, we suggested how to turn abstract business knowledge into machine learning variables. Using the Fictional Online Fashion Store example, we illustrated how to think about data sources when building a tool for website content personalization and product recommendation. Imagine we have successfully trained the model. Why does it behave the way it does? Besides the input data, its behavior depends on the model architecture we have chosen.

Similarity matrix and clustering

From one point of view, we can think of intelligence as a process of labeling input data. Each millisecond, our brain processes perceptions from our senses (data) and assigns them a category or state based on our knowledge and historical experience. Mathematically speaking, it’s basically a search for arguments which maximize a given probability function. If we choose associative modeling as our machine learning strategy, we can represent this probability function using a heat map or network graph.

The following picture illustrates how a simple similarity matrix may look from the inside. The machine learning for such a model could work as follows: if a customer purchases product 1 together with product 2, we increase the value of the matrix at coordinates [1, 2] (and [2, 1] accordingly). Each value in the matrix then represents the relevance between the products at the corresponding indices, based on historical purchases. A bigger value means that a customer who has already added product 1 to their cart is more likely to buy product 2 as well if we offer it to them.

The best combination of products to offer is represented by indices of maximum values for corresponding rows.
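The update rule and the row-maximum lookup described above can be sketched in a few lines of Python. The catalogue size, product indices and baskets below are invented for illustration:

```python
import numpy as np

N_PRODUCTS = 4  # hypothetical catalogue size

# Similarity matrix: entry [i, j] counts how often products i and j
# were purchased together.
similarity = np.zeros((N_PRODUCTS, N_PRODUCTS), dtype=int)

def record_purchase(basket):
    """Update the matrix for every pair of products bought together."""
    for i in basket:
        for j in basket:
            if i != j:
                similarity[i, j] += 1

def recommend(product, k=2):
    """Return the k products most often co-purchased with `product`."""
    row = similarity[product]
    return list(np.argsort(row)[::-1][:k])

# Historical purchases (baskets of product indices)
record_purchase([0, 1])
record_purchase([0, 1, 2])
record_purchase([2, 3])

print(recommend(0))  # products most relevant to product 0
```

Note that `recommend` is exactly the “indices of maximum values for the corresponding row” rule from the text.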

Heat map – an easy way to illustrate relevance between products

A slightly different visualization could be made using a graph to capture relationships between products. Each point in the graph will represent one particular product. In this case, we can think of machine learning as moving points in the graph closer or further apart using data about purchases. After some time, points will form clusters suggesting which products should be offered together.
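A toy version of this “moving points closer” idea can be written in plain Python. The products, co-purchase events and learning rate are all made-up assumptions for illustration:

```python
import random, math

random.seed(0)
products = ["tee", "jeans", "dress", "heels"]  # hypothetical products
pos = {p: (random.random(), random.random()) for p in products}

# Co-purchase events observed in the (invented) data
co_purchases = [("tee", "jeans"), ("tee", "jeans"), ("dress", "heels")]

LEARNING_RATE = 0.2

def pull_together(a, b):
    """Move two co-purchased products a small step towards each other."""
    (ax, ay), (bx, by) = pos[a], pos[b]
    pos[a] = (ax + LEARNING_RATE * (bx - ax), ay + LEARNING_RATE * (by - ay))
    pos[b] = (bx + LEARNING_RATE * (ax - bx), by + LEARNING_RATE * (ay - by))

def dist(a, b):
    (ax, ay), (bx, by) = pos[a], pos[b]
    return math.hypot(ax - bx, ay - by)

for _ in range(50):              # repeated passes over the purchase data
    for a, b in co_purchases:
        pull_together(a, b)

# Co-purchased products end up much closer than unrelated ones
print(dist("tee", "jeans") < dist("tee", "dress"))
```

After enough passes, the co-purchased pairs collapse into tight clusters, which is exactly the grouping the network graph makes visible.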

A network graph is a good way to visualize relationships between products and clusters

Random forest

Another method our brain uses to make decisions is the subconscious creation of stereotypes. When we experience the same situation with the same result over and over, the probability of this sequence of events surpasses an imaginary threshold. When this occurs, our brain creates a rule which replaces the computation of probability for this case in the future. We can think of it as an optimization of our brain's computation capacity. In other words, stereotypes help us make decisions more quickly and efficiently. A machine learning counterpart of this principle is the random forest.

A random forest is nothing more than a set of several IF-THEN decision trees. What is interesting is how these trees and rules are formed during training. The result of the machine learning process is determined by several hyperparameters. The most important ones are:

  • number of decision trees in the model
  • maximum depth of each tree
  • minimum number of data points required to split an internal node
  • minimum number of data points required to form a leaf node

During the machine learning process, an algorithm looks for features and values which could be used to split data points into tree branches while meeting conditions specified by chosen hyperparameters.
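A minimal sketch of that split search, assuming Gini impurity as the purity measure. The customer features and the `MIN_SAMPLES_LEAF` value are invented for illustration:

```python
# Each data point: (features, label) – hypothetical customer features
# [days_since_last_visit, recent_purchases] and whether they bought again.
data = [
    ([2, 5], 1), ([3, 4], 1), ([30, 0], 0), ([25, 1], 0),
    ([5, 3], 1), ([40, 0], 0),
]

MIN_SAMPLES_LEAF = 2  # hyperparameter: minimum data points in a leaf

def gini(points):
    """Gini impurity of a set of labelled points (0 = perfectly pure)."""
    if not points:
        return 0.0
    p = sum(label for _, label in points) / len(points)
    return 2 * p * (1 - p)

def best_split(points):
    """Try every feature and threshold; keep the purest allowed split."""
    best = None
    for feature in range(len(points[0][0])):
        for threshold in {f[feature] for f, _ in points}:
            left = [p for p in points if p[0][feature] <= threshold]
            right = [p for p in points if p[0][feature] > threshold]
            if len(left) < MIN_SAMPLES_LEAF or len(right) < MIN_SAMPLES_LEAF:
                continue  # split rejected by the hyperparameter
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(points)
            if best is None or score < best[0]:
                best = (score, feature, threshold)
    return best

print(best_split(data))  # (impurity, feature index, threshold)
```

Here the search discovers that splitting on the first feature at threshold 5 separates the two classes perfectly, and the `MIN_SAMPLES_LEAF` check is the hyperparameter condition from the list above in action.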


How to deal with data selection and knowledge representation in AI projects? You can find a few tips in the third article in our “Unlocking AI potential in e-commerce and online business” series!


For instance, when predicting future purchases, we can think of this machine learning process as splitting customers into segments based on historical interactions or the number of recent purchases.

The limit on the minimum number of data points in tree branches and leaf nodes is actually the encoded probability threshold mentioned earlier. The maximum tree depth represents the level of granularity, and the number of decision trees determines how many different solutions to the problem will be combined into the final result.

One of the trees in a random forest model which was trained to predict future purchases

The output of the trained model therefore reflects relationships between data points, while the internal rules and splits reveal important features and threshold values. That’s why a random forest works not only as a final solution, but also as a very efficient data exploration method in the initial steps of AI prototyping. Strong features and rules can be extracted from the model and used on their own as a decision support tool.
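Assuming scikit-learn is available, the hyperparameters listed earlier map one-to-one onto `RandomForestClassifier` arguments, and the trained model exposes feature importances for exactly this kind of data exploration. The training data below is invented:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: [days_since_last_visit, recent_purchases]
X = [[2, 5], [3, 4], [30, 0], [25, 1], [5, 3], [40, 0]]
y = [1, 1, 0, 0, 1, 0]  # 1 = customer purchased again

model = RandomForestClassifier(
    n_estimators=10,      # number of decision trees in the model
    max_depth=3,          # maximum depth of each tree
    min_samples_split=2,  # minimum data points to split an internal node
    min_samples_leaf=1,   # minimum data points required in a leaf node
    random_state=0,
)
model.fit(X, y)

# Feature importances make the forest useful for data exploration:
# they reveal which inputs drive the decision.
print(dict(zip(["days_since_last_visit", "recent_purchases"],
               model.feature_importances_)))
print(model.predict([[4, 4]]))
```

The importances (which sum to 1) can be read off directly and reused as a stand-alone decision support signal.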

Neural network

Mathematical representation of a neuron

The first model of an artificial neuron was created by Warren McCulloch and Walter Pitts in 1943. It was a simple mathematical representation of a signal transfer between real neurons in the human brain.

Since then, multiple neural network architectures have been developed; nevertheless, the core principle has remained the same. Perceptions from the senses are transformed into a signal which causes a chain activation of particular neurons. A neuron's activity is determined by its transfer function and the function's arguments (weights), which are formed during the learning process.
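A minimal Python rendering of the McCulloch–Pitts idea – a weighted sum of binary inputs followed by a threshold:

```python
def neuron(inputs, weights, threshold):
    """McCulloch–Pitts style neuron: fire (1) if the weighted sum of
    binary inputs reaches the threshold, otherwise stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With suitable weights and thresholds, a single neuron computes
# simple logic functions:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(1, 0), OR(0, 0))    # 1 0
```

Modern networks replace the hard threshold with smooth transfer functions and learn the weights from data, but the signal-and-activation structure is the same.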

Neural network and human brain

Neural networks are usually considered a black box. The main reason is that it is extremely hard to explain why particular neurons are activated when a particular input is presented to the trained network.

To some extent we can use biology (or neuroscience) for inspiration. When studying the human brain, scientists measure the activity of different parts of the brain in response to a particular perception. Likewise, we can log activation of artificial neurons and explore which part of the neural network is responsible for a specific decision.
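A sketch of such activation logging on a tiny network, with random weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random weights (stand-in for a trained model)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))

def forward(x, log):
    """Forward pass that records each layer's activations, the way a
    neuroscientist records activity in different brain regions."""
    h = np.maximum(0, x @ W1)          # hidden layer (ReLU)
    log.append(("hidden", h.copy()))
    out = h @ W2                        # output layer
    log.append(("output", out.copy()))
    return out

log = []
forward(np.array([1.0, 0.5, -0.2]), log)

# Inspect which hidden neurons fired for this particular input
hidden = dict(log)["hidden"]
print("active hidden neurons:", np.nonzero(hidden > 0)[0])
```

Aggregating such logs over many inputs shows which parts of the network respond to which kinds of input – the artificial analogue of a brain scan.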


How to successfully execute AI-related projects? You can find a few tips in this article in our “Unlocking AI potential in e-commerce and online business” series!


The question is, could we use such information to estimate the behavior of the neural network in the future?

This topic is still being studied. For example, a recent study conducted by Google AI scientists examines the possibility of using attribution modeling and neuron activation monitoring to create human-readable insights into the decision-making process of a neural network during image classification. The method transforms the importance of image pixels (input neurons) into the importance of image features – for example, how important a “stripe” feature is to the neural network when classifying a picture of a zebra.

I know what is inside the AI – what next?

One reason why the interpretability of machine learning models is becoming a big issue is the growing discussion about the ethical aspects of AI. What can we do to prevent the development of a biased and unfair mathematical model? Don’t miss our next article!


Unlocking AI potential in e-commerce and online business – the right data


Welcome! Our “Unlocking AI potential in e-commerce and online business” series aims to provide basic guidance on applying AI (Artificial Intelligence) in businesses that use the internet as the primary channel for delivering value to their customers. In this article, we focus on how to think about data and data sources in a way that gets the most out of machine learning models. Enjoy your reading!

Garbage in – garbage out in AI-focused projects

AI is not just one particular computer program or algorithm. Instead, it’s a system of several mathematical models. The ability of this system to emulate the human decision-making process relies on the right input information. Companies often suppose that more data leads to a better model. Unfortunately, it is not that easy.

Big Data does not necessarily mean you will have a good AI model

The result depends on how well the data covers the key drivers of the problem we are trying to solve. When it succeeds, AI can improve business results by tens of percent. McKinsey Global Institute published research which attempts to simulate the impact of AI on the world economy:

  • $13 trillion – the amount of additional global economic activity AI could deliver by 2030
  • 20% – the decline in cash flow from today’s levels that nonadopters might experience by 2030, assuming the same cost and revenue model as today

Are you new to AI and Machine Learning? If so, we recommend the first article in our “Unlocking AI potential in e-commerce and online business” series, which deals with basic concepts.

Garbage in - Garbage out

A very common beginner’s mistake is spending most of the time on developing complex algorithms instead of brainstorming about the right data.

  • A simple model based on good features always beats a complex model based on poor features

Collecting data at random with no idea of how exactly to utilize it often leads to data inconsistency and bad data structure. Features extracted from data collected under such conditions are almost unusable for any kind of analysis or machine learning. Having some data is better than having no data, but without a proper business case, mathematical modeling is like looking for a needle in a haystack.

Data worth storing

When building AI for e-commerce, we often automate tasks and procedures which are done manually by domain experts. A common way to utilize data is analysis done by hand, followed by some action in response to the new insights. Thanks to machine learning, we can replace this process with a mathematical model which performs an action directly, based on real-time and historical data.

  • The right data for machine learning is often that which yields similar information to what a human expert would use to solve the problem

Data and ontology

Let’s think about how this works in a real case. In our last article we introduced a fictional company called Fictional Online Fashion Store which would like to personalize its website content in real time and offer customers relevant products. We already illustrated how to approach the project from the management point of view. How should we handle the data selection process? How would a human expert proceed in a regular brick-and-mortar fashion store? When a new customer comes in, a salesperson runs through a quick mental checklist:

  1. Is it a new or an existing customer?
  2. Is it a man or a woman?
  3. How old is he/she?
  4. What is his/her fashion style?
  5. How wealthy is he/she?
  6. Does he/she view goods slowly or quickly?
  7. Does he/she view goods which share a particular feature?
  8. Does the current day, time or season matter? Is late evening shopping typical for some particular customer segment?
  9. What does our competition offer in this area?
  10. What is popular among celebrities and influencers?

These brainstorming questions form the basis for an ontology – a definition of entities, attributes and their relations in the given context.

In our case we have at least two entities – customers and products. Each customer has attributes such as shopping history, gender, age, fashion style, income and shopping behavior (questions 1–7). Similarly, each product in our fictional fashion store has attributes such as fashion style or target customer segment.

The values of these attributes are used by the human expert (the salesperson) to recommend relevant products in the given context (questions 8–10). How do we represent such information using available data sources?

Ontology and data context

Knowledge representation

Let’s split the information mentioned earlier into several segments:

  • Demographics – customer gender, age and income
  • Shopping behavior history – customer fashion style based on his or her clothes and accessories, history of purchases in our Fictional Online Fashion Store and history of interaction with newsletters and marketing campaigns
  • Current shopping behavior – real-time interaction with the Fictional Online Fashion Store website (viewing products, banner clicks)
  • External context – shopping and fashion trends for the current season or time, what the competition offers, trends among celebrities and influencers

When building an AI model, we need to think about the significance of each information segment and how well we can cover it using available data sources.

Demographics

Demographics such as age or income can be tricky to obtain due to consumer privacy and data protection regulations. We can use a registration form, but we never know whether the customer is telling us the truth. Luckily, we can estimate this information using correlated data.

We often know the location, mobile device type and operating system, which reveal a possible income category. An iPhone owner from a big city probably spends more on fashion than the owner of a cheap phone from a rural area. In the case of an existing customer, we can combine this information with historical purchases to get a more precise estimate.

Based on the visit time and location we can guess the age group. The daily schedule of an economically active population differs from the schedule typical for kids or seniors.
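A deliberately crude sketch of such estimates. Every rule, threshold and category below is a hypothetical assumption made for illustration, not a validated demographic model:

```python
# All rules and categories here are illustrative assumptions only.
def estimate_income_band(device, city_size):
    """Rough income-band guess from device type and location."""
    if device == "premium_phone" and city_size == "large":
        return "high"
    if device == "premium_phone" or city_size == "large":
        return "medium"
    return "low"

def estimate_age_group(visit_hour):
    """Rough age-group guess from visit time: daytime weekday visits are
    less typical for the economically active population."""
    return "kids_or_seniors" if 9 <= visit_hour <= 15 else "working_age"

print(estimate_income_band("premium_phone", "rural"))   # medium
print(estimate_age_group(10))                           # kids_or_seniors
```

In practice these hand-written guesses would be replaced by features learned from correlated data, but the principle – estimating hidden attributes from observable ones – is the same.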


How to successfully execute AI-related projects? You can find a few tips in this article in our “Unlocking AI potential in e-commerce and online business” series!


Shopping behavior history

It is almost impossible for an online store to guess the shopping behavior history of a new customer. The only information available is usually the mobile device type and operating system. Thankfully, it’s much easier once the customer starts interacting with the website and marketing content. We can begin to create a customer profile starting with the first page view.

Current shopping behavior

Similarly, current shopping behavior trends could be obtained in real time by tracking website interactions. We can look for a similarity in attributes among viewed products, compute the number of visited product categories or count the time spent on product details.
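A sketch of turning a raw event stream into such session features. The event names, categories and timestamps are invented for illustration:

```python
from collections import Counter

# Hypothetical real-time event stream: (timestamp_s, event, product_category)
session = [
    (0, "view", "dresses"), (35, "view", "dresses"),
    (80, "view", "shoes"), (95, "banner_click", None), (120, "view", "dresses"),
]

def session_features(events):
    """Turn a raw click stream into simple behavioral features."""
    views = [e for e in events if e[1] == "view"]
    categories = Counter(c for _, _, c in views)
    return {
        "n_views": len(views),
        "n_categories": len(categories),
        "top_category": categories.most_common(1)[0][0],
        "session_length_s": events[-1][0] - events[0][0],
    }

print(session_features(session))
```

Features like these can be recomputed after every event, which is what makes real-time personalization possible.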

External context

Data representation of the external context is the toughest. In fact, there are countless things which could be relevant to a customer’s preferences and behavior. From a technical point of view, less abstract information is easier to represent. In our case, this could mean web scraping selected blogs and newspapers associated with the target demographic group and extracting particular keywords. In a similar way, we can track what selected competitors offer. Another useful piece of context information could be a weather forecast.
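A sketch of the keyword-extraction step, using a simple frequency count over invented “scraped” text (a real pipeline would use proper NLP tooling):

```python
from collections import Counter
import re

# Stand-in for text scraped from a fashion blog (hypothetical content)
scraped = """Oversized blazers are everywhere this season. Pair an oversized
blazer with vintage denim; vintage accessories complete the look."""

STOPWORDS = {"are", "this", "an", "with", "the", "a"}

def top_keywords(text, k=3):
    """Extract the most frequent non-stopword tokens as trend keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(k)]

print(top_keywords(scraped))
```

Keywords extracted this way can become external-context features in the model's feature vector.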

Now we know where to find the right data – what next?

If you want to unlock the full potential of your data in AI, you need a system to process it, analyze it and, most importantly, use it in AI models. We will cover all these aspects in our series “Unlocking AI potential in e-commerce and online business”. Stay tuned!


Unlocking AI potential in e-commerce and online business – project management

Welcome! Our “Unlocking AI potential in e-commerce and online business” series aims to provide basic guidance on applying AI (Artificial Intelligence) in businesses that use the internet as the primary channel for delivering value to their customers. In this article, we focus on the processes and workflow in AI-related projects. Enjoy your reading!

AI project life cycle

A common beginner’s mistake is treating an AI project as a regular software development project. That’s why we think it’s important to point out how the two differ in workflow, goals and deliverables.


Are you new to AI and Machine Learning? We recommend the first article in our “Unlocking AI potential in e-commerce and online business” series, which deals with basic concepts.


AI development is not just software development

Sometimes we work with companies that try to use tools and procedures typical for engineering processes when prototyping an AI solution. In those cases, software engineers are in charge of analytical tasks. This often leads to the wrong goal metric, missed deadlines and time wasted on unnecessary meetings.

Software development is like building a house

Software development is like building a house. The goal is obvious from the very beginning, defined via product features whose specification usually does not change during the whole development process. Milestones are clear, progress can be easily measured and each product feature can be tested using unit tests. You can easily split independent tasks among several development teams.

AI development could be compared to solving a puzzle

AI development, on the other hand, is iterative research which could be compared to solving a puzzle or finding a path in a maze. The primary goal can change repeatedly based on new findings from the data. Instead of product features, there is the precision and consistency of mathematical models, which can be tricky to measure correctly. Success or failure depends on the quality of the data and the ability to simulate reality. All these aspects mean you need to bear in mind the greater uncertainty caused by data variability and changing conditions. In any case, you will gain valuable insights which will help you with strategic decisions and push your business forward.

When it comes to building a team, quality is much more important than quantity. Two or three experts who are familiar with the business domain and state-of-the-art technologies will always outperform an army of freshers and junior analysts. That’s because AI projects are about the right ideas and inventive thinking, not just the number of man-days spent coding.

If you want to leverage AI in e-commerce and online business, you need the right tools. One of them is a good understanding of AI project processes.

CRISP-DM and its role in AI projects

Do you already have some experience with analytics? If so, you might be familiar with CRISP-DM (Cross-Industry Standard Process for Data Mining). It has been around since 1996, originally crafted by five companies: Daimler AG, Integral Solutions Ltd (ISL), NCR Corporation, OHRA and Teradata. CRISP-DM is a widely used process model which describes the workflow typical for almost any data-related project. Thankfully, AI projects are in many ways similar to regular data-related projects.

CRISP-DM
  • A good process for AI development enables quick orientation in the project life cycle and helps you not forget the important things.

At pbi.ai we use a three-phase methodology inspired by CRISP-DM:

  1. Discovery phase – The first phase focuses on problem understanding and initial data analysis. The main purpose of this phase is to assess available options, make a decision regarding the overall strategy and define clear, quantifiable and measurable goals.
  2. Prototyping phase – Mathematical modeling, simulations and experiments with AI algorithms happen in the second phase. Most of the time is spent on feature engineering and optimization of machine learning hyperparameters.
  3. Deployment phase – In the final phase, the AI prototype’s code is optimized and deployed into the production environment. Depending on the situation, the final AI program is used as a stand-alone module or integrated into streaming pipelines and ETL procedures (ETL – Extract, Transform, Load).

pbi.ai approach to the project cycle

Real case scenario

Now that you know the basics, you might be wondering how this works in the real world. Imagine a fictional company named Fictional Online Fashion Store which would like to personalize its website content in real time and offer customers relevant products. Now let’s see how to approach the project from the management point of view.

Discovery phase

The essential part of each AI project is understanding the problem. We’ve seen many companies underestimate this and dive head-first into coding complex mathematical models and torturing data to achieve performance slightly better than a random guess. Instead, the most valuable activity at the start of the project is brainstorming and thinking about knowledge representation.

One of the most important tasks is to create an ontology. An ontology is a description of entities and their relations. In our case, this could mean revising the product tagging, defining custom variables for online tracking and creating a nomenclature for sequences of customer online actions.

Another part of this phase is crafting a data strategy. We need to analyze available data sources and decide whether we are able to get sufficient information about online customer behavior and the data context. In terms of content personalization, it would be good to check the availability and consistency of the historical online events data. It would also be useful to perform a basic correlation and clustering analysis to get initial insights. This will help determine the most promising direction for the next steps and define baseline goals. A well-defined goal in our case could be, for example, to increase the number of product detail views in the target customer segment, during a particular period of time, in the selected location and validated through A/B testing.


How to deal with data selection and knowledge representation in AI projects? You can find a few tips in the third article in our “Unlocking AI potential in e-commerce and online business” series!


Prototyping phase

Once an overall strategy is crafted, we can proceed to the next phase. Prototyping is a repetitive hypothesis-driven research process focused on finding the best performing combination of mathematical models and features.

In the case of our Fictional Online Fashion Store and the website content customization, we will probably dig into the transaction history. Good features enable a simple and robust AI model to be developed. A simple model based on good features always outperforms a complex model based on bad features. It is also much better in terms of future maintenance and adjustments.

In our case, we need to find patterns in the online shopping behavior and to identify customer and product attributes linked to the buying decision process. Besides features for our model, we will get valuable insights for other marketing tasks like session scoring or lead scoring.

When dealing with online data, make sure its context is examined as well. Common external factors include weather, what competitors offer or trends among celebrities. All these aspects may influence the customer in a particular season or time period and cause anomalies in the data. If we manage to represent this information in our feature vector, we are on the right track to a machine learning model with consistent performance.

Deployment phase

The final phase covers implementation of the AI prototype in the production environment and its validation on real data. AI models, for the most part, work as stand-alone modules. Thus, the deployment phase is usually about an API and a database schema to store the model values.

Another task typical for this phase is the development of ETL pipelines to clean data and prepare features for machine learning. It is useful to integrate some logging and define inputs for reporting so that performance can be measured and recorded continuously.

Once everything is ready, we can test the model, check whether it met the goals and decide on the next steps. Make sure you perform A/B testing from time to time to check the model performance. The external context should be reviewed on a regular basis as well in order to identify new factors which influence customers.

I know how to manage the AI project – what now?

Are you interested in learning more about AI and how it can benefit your business? Now that you know the basics of AI project management, you might be interested in how to select the right data sources and machine learning algorithms for particular use cases. We will cover all these aspects in our series “Unlocking AI potential in e-commerce and online business”. Stay tuned!


Unlocking AI potential in e-commerce and online business – basic concepts

Welcome! Our “Unlocking AI potential in e-commerce and online business” series aims to provide basic guidance on applying AI (Artificial Intelligence) in businesses that use the internet as the primary channel for delivering value to their customers. In this article, we focus on explaining some terms and buzzwords often used in connection with AI. Enjoy your reading!

How can we profit from AI?

This is a question many CEOs and CMOs are asking us. No wonder, according to a recent article published by Google:

  • 85% of executives believe AI will allow their companies to obtain or sustain a competitive advantage
  • 66% of marketing leaders agree automation and machine learning will enable their team to focus more on strategic marketing activities

How to successfully execute AI-related projects? You can find a few tips in the second article in our “Unlocking AI potential in e-commerce and online business” series!


AI has been around for 60 years, evolving from simple rule-based systems into advanced mathematical models capable of adapting to changing conditions in a real environment. Powerful hardware combined with data availability enables machine learning solutions to be developed for almost every e-commerce and online business. Nevertheless, there are several questions each executive should ask before starting an AI-oriented project:

  • What exactly does AI mean in the context of our business?
  • Which phases will the AI project have and what kind of preparation is needed?
  • Should we build an in-house team? What tasks should we outsource?
  • How do we measure progress?
  • Will the investment into AI pay off?

The answers to these questions lead to a precise plan for how to get value from AI and machine learning; we will help you find them.

Is AI just a fancy name for statistics?


One issue with Artificial Intelligence is that there is no exact definition of what it really stands for. The most common approach is to relate AI to human behavior:

“Artificial Intelligence is the science of making machines do things that would require intelligence if done by men.” (Marvin Minsky, 1967)

But here comes the problem – is human behavior always intelligent? Can we expect “rational” behavior from our customers when developing a predictive machine learning solution? The most important part of each AI project is the initial business analysis, which should reveal the general aspects of the problem we are trying to solve. We want experiments and simulations to be as realistic as possible; otherwise, we risk developing an inconsistent mathematical model with poor performance in a real environment. The degree of customer irrationality and the level of abstraction of the business case indicate the requirements on the solution.

Human behavior vs. intelligent behavior

Based on the complexity, we can split AI projects into four categories:

  1. Rule-based systems without internal representation of the real world: simple IF-THEN rules; e.g. an e-mailing program which sends messages based on the day of the week and the customer demographic characteristics
  2. Rule-based systems with internal representation of the real world: similar to the previous one, but enhanced by definition of states and relations between the entities in the environment; e.g. a recommendation engine recommending products to the customer based on his or her recent online journey
  3. Machine learning systems without continuous learning: a mathematical model is trained on historical or prepared training data; no further learning is done after deployment; e.g. a random forest for churn prediction trained on the historical behavioral data
  4. Machine learning systems with continuous learning: a mathematical model is updated continuously, using recent or real-time data; e.g. a dynamic customer segmentation used by social media companies

Therefore, as a scientific discipline, AI covers everything from simple decision trees to advanced mathematical models.
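For illustration, the simplest category above – a rule-based system without internal representation of the real world – really is just hand-written IF-THEN rules. The campaign names and segments in this sketch are hypothetical:

```python
# A category 1 system: plain IF-THEN rules, no model of the world.
def pick_campaign(weekday, age_group):
    """Choose an e-mail campaign from simple hand-written rules."""
    if weekday in ("sat", "sun") and age_group == "18-25":
        return "weekend_streetwear"
    if weekday == "fri":
        return "payday_sale"
    return "weekly_newsletter"

print(pick_campaign("sat", "18-25"))  # weekend_streetwear
```

Categories 2 to 4 progressively replace such hand-written rules with learned models of customers, products and their relations.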

Sometimes people ask us, “Hey, can we borrow your AI for a while and test it?” We should point out that AI is much more an approach to problem-solving than one particular computer program or algorithm. From a philosophical point of view, we can define two types of AI:

  • Weak AI: a simulation of human decision making process using mathematical models and available data – this basically covers all instances and implementations of AI nowadays
  • Strong AI: a machine or algorithm which is aware of its own existence, has its own desires and goals and can reproduce itself (make another machine or algorithm with similar properties) – this is the subject of philosophical and scientific debates and the inspiration for sci-fi books and movies; nevertheless, we will probably not see anything like it in the near future

How to deal with data selection and knowledge representation in AI projects? You can find a few tips in the third article in our “Unlocking AI potential in e-commerce and online business” series!


How to leverage AI in e-commerce?

There are many drivers of success in e-commerce and internet companies. Based on data sources and the nature of the AI solution we can distinguish two main directions:

  1. Improvement of customer satisfaction
  2. Optimization of internal processes

1. Improvement of customer satisfaction

A happier customer leads to bigger profits, and better personalization and customer support lead to a happier customer. AI could be a real game changer in these areas.

Marketing persona

The most commonly known type of solution is probably the product recommendation engine, which chooses products to show to the customer based on their transaction and browsing history. In some cases, that alone can boost online sales by tens of percent. Besides product recommendation, almost every piece of online communication, including banners, navigation and messages, can be personalized in real time using the right data.

Advanced algorithms can dynamically define marketing personas or identify the ones you have already created. It is much easier to increase sales when you know which prospects are ready to buy. Customer segmentation is a big deal and can be a basis for the whole marketing strategy.

2. Optimization of internal processes

Another way to leverage AI in e-commerce is to use NLP (Natural Language Processing) techniques for semantic modeling and keyword optimization. This will help you create product tags and categories automatically. Many companies do this manually, employing several people to read product descriptions each time a new product comes out. This is not just time consuming; it also creates lots of synonyms among tags and keywords, which makes further semantic analysis almost impossible and reduces the performance of the search engine. With NLP, you can standardize the whole process, save time and money and improve the product search for your customers.

NLP techniques – word embedding

Many e-commerce companies use a BI (Business Intelligence) solution to support their strategic decisions. If you are one of them, it may interest you that more and more manual BI-related tasks are being replaced by machine-learning-driven automation. In fact, it is quite easy to delegate simple BI tasks to AI algorithms so that your experts can focus on the tough ones. Typical areas for this kind of optimization are pricing, buying, logistics and warehousing. You can also use smart algorithms as an alarm system to detect unusual developments in your data, either to discover possible threats or interesting business opportunities.

Should we implement AI in our company?

If you want to benefit from AI, you need a few things: a proper understanding of the AI project life cycle, the right data and the right people. We will cover all these aspects in our series “Unlocking AI potential in e-commerce and online business”. Stay tuned!
