Conjoint Glossary
A
ACA (Adaptive Conjoint Analysis)
A conjoint analysis technique developed by Rich Johnson in the 1980s. It customizes the interview process for each respondent and is particularly suited for situations with more attributes than can feasibly be handled with traditional conjoint analysis (TCA). ACA focuses on the attributes most relevant to the respondent, reducing cognitive overload by presenting only a few attributes at a time.
ACBC (Adaptive Choice-Based Conjoint)
A proprietary multistage approach from Sawtooth Software that integrates a “Build Your Own” section, a binary choice component, and a choice tournament into a conjoint analysis model. This method adapts the questions according to the respondent’s previous answers in order to gather more detailed and meaningful information during the interview than traditional choice-based conjoint (CBC).
Allocation
A term used in conjoint analysis to describe the process of distributing a limited number of points or resources across attributes to indicate preferences. This method helps quantify the relative importance of the features of a product or service.
Alpha Draws
The sample-level part-worth values in hierarchical Bayesian (HB) regression. This is also referred to as the upper level of the hierarchy and corresponds to the multinomial logit (MNL) model of the population (see Upper-Level Model).
Alternative Specific Constant
A parameter capturing systematic differences between choices that are not explained by the observed attributes. Alternative specific constants can be used to adjust models to account for inherent biases toward specific alternatives such as brands or outside goods.
Alternative Specific Design
A method used in conjoint experiments in which each alternative has a unique set of attributes, allowing for flexible representation of complex product scenarios. This approach is valuable for customizing the choice context in detailed market simulations.
Ant Colony Optimization
An optimization algorithm inspired by the foraging behavior of certain ant species. These ants guide other individuals to follow optimal paths by depositing pheromones. Ant colony optimization applies this concept to solve complex optimization problems by simulating pheromone-based pathfinding to identify the best solutions.
Asymptotic Posterior
The limit of a Bayesian posterior distribution as the sample size approaches infinity. Under certain conditions, the posterior is asymptotically normal (a result known as the Bernstein–von Mises theorem).
Attribute Importance
A measure of the relative weight or significance of each attribute in a conjoint study. This is used to determine which product features most influence consumer choices, guiding product development and marketing strategies.
Autocorrelation
A mathematical tool for finding repeating patterns, such as detecting the presence of a periodic signal obscured by noise or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. Autocorrelation plays an important role when analyzing the stability and convergence of Markov chains in hierarchical Bayesian regression.
B
Backpropagation
A gradient-based optimization method widely used for training neural networks, which enables the computation of parameter updates by propagating error gradients backward through the network.
Bagging (Bootstrap Aggregating)
An ensemble learning method that trains multiple models independently on random subsets of a dataset and combines their predictions, typically through voting for classification or averaging for regression, to improve accuracy and reduce variance.
Bayesian Estimation
A statistical approach used in conjoint analysis to update beliefs about preferences using observed data. It is particularly useful for handling individual-level variation and producing robust estimates.
Bayesian Information Criterion
A statistical metric used to evaluate the goodness of fit of a model while penalizing the complexity of the model to avoid overfitting.
Bayes’ Theorem (Bayes’ Law, Bayes’ Rule)
A mathematical rule named after Thomas Bayes for inverting conditional probabilities, allowing us to find the probability of a cause given its effect.
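A minimal numerical sketch (all numbers are hypothetical, chosen only for illustration): a diagnostic test with a 99% true-positive rate and a 5% false-positive rate, applied to a condition with 1% prevalence:

```python
# Bayes' theorem: P(cause | effect) = P(effect | cause) * P(cause) / P(effect)
# All numbers below are hypothetical, chosen only for illustration.
p_cause = 0.01               # prior probability of the cause
p_effect_given_cause = 0.99  # likelihood of the effect given the cause
p_effect_given_not = 0.05    # probability of the effect without the cause

# Total probability of observing the effect:
p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_not * (1 - p_cause))
posterior = p_effect_given_cause * p_cause / p_effect
print(round(posterior, 3))   # probability of the cause given the effect
```

Despite the accurate test, the posterior is only about 17%, because the prior probability of the cause is so low.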
Behavioral Economics
The study of the psychological (e.g., cognitive, behavioral, affective, social) factors involved in the decisions of individuals or institutions and how these decisions deviate from those implied by traditional economic theory.
Best–Worst Scaling (MaxDiff Scaling)
A method in which respondents select the most and least preferred options from a set, providing clearer preference distinctions. This approach is highly effective for ranking and prioritizing attributes.
Boosting
An ensemble metaheuristic used in machine learning primarily to reduce bias (as opposed to variance). It can also improve the stability and accuracy of machine learning classification and regression algorithms.
Bradley–Terry–Luce Model
A model for paired comparison data that is able to obtain a ranking of objects that are compared pairwise by subjects. The task of each subject is to make preference decisions in favor of one of the objects.
Brute Force Search (Exhaustive Search, Generate and Test)
A widely applicable problem-solving technique and algorithmic paradigm that involves systematically checking whether each possible candidate satisfies the problem statement.
Build Your Own
A technique used in questionnaires that allows respondents to configure their own products.
Burn-In Period
The preliminary steps during which a Markov chain moves from its unrepresentative initial value to the modal region of the posterior. In realistic applications, it is common to apply a burn-in period of several hundred to several thousand steps for the Markov chain Monte Carlo (MCMC) chains in a hierarchical Bayesian model.
C
Card Sort Conjoint
An early conjoint technique in which cards were created for the attribute/level combinations according to the experimental design, which respondents were asked to rank in order of purchase preference.
Choice-Based Conjoint
A decompositional method in which the importance of individual attributes and their levels is deduced from respondents’ choices among complete product concepts.
Choice Simulator
A tool that allows researchers to predict market share and simulate consumer choice under different product configurations. It is instrumental in forecasting demand and optimizing product designs.
Conditional Pricing
A strategy in which the price of a product or service depends on certain conditions, such as purchase volume, customer characteristics, or additional features selected.
Confidence Interval
A range of values derived from sample data that is likely to contain the true value of a population parameter with a specified level of confidence, typically expressed as a percentage (e.g., 95%).
Constant Sum
A scaling technique in which respondents allocate a fixed number of points across attributes to express their relative importance. This method provides a direct measure of attribute significance in preference formation.
Contingent Valuation
A survey-based economic method used to estimate the value placed by individuals on non-market goods or services, such as environmental benefits or public resources, by eliciting their willingness to pay or accept compensation.
Covariance Matrix (Variance-Covariance Matrix)
A square matrix whose off-diagonal entries are the covariances between the respective pairs of variables in a dataset, capturing the direction and strength of their linear relationships, and the diagonal elements are the variances of the respective variables. The covariance matrix of the upper-level model is one of the important results obtained from hierarchical Bayesian estimation.
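A small sketch with hypothetical data, computing the variance-covariance matrix of two variables using the sample (n − 1) denominator:

```python
# Variance-covariance matrix of two toy variables (hypothetical data).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]          # perfectly correlated with x

def cov(a, b):
    # Sample covariance with the (n - 1) denominator.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

matrix = [[cov(x, x), cov(x, y)],
          [cov(y, x), cov(y, y)]]  # diagonal: variances; off-diagonal: covariances
print(matrix)
```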
CVA (Conjoint Value Analysis)
A legacy conjoint analysis package from Sawtooth Software, based on the original ratings-based conjoint approach from the 1970s.
Cross Effect
The influence that the attributes or changes in one product or variable have on the demand or perception of another product or variable in a market (i.e., cross prices).
D
Decision Tree
A graphical representation of decision-making processes, in which nodes represent decisions or conditions, branches represent possible outcomes, and leaves represent final outcomes or classifications.
Deep Learning
A subset of machine learning that uses neural networks with multiple layers to model and analyze complex patterns in large datasets, often achieving high performance in tasks like image recognition, natural language processing, and prediction.
Design Block
A subset of experimental conditions or stimuli grouped together to reduce variability and ensure more efficient comparisons by accounting for specific factors, such as time, location, or participant characteristics.
Design Version
see Design Block
Dirichlet Distribution
A multivariate probability distribution commonly used in Bayesian statistics, representing probabilities for a set of outcomes that sum to one, often serving as a prior for categorical data.
Discrete Choice Model
A model used to explain or predict a choice from a set of two or more discrete (i.e., distinct and separable; mutually exclusive) alternatives. It can be used for both stated and revealed preference data.
Draws
In the context of probability and simulation, “draws” are values sampled from a distribution, used to approximate quantities of interest such as expectations or choice probabilities.
Dual-Response None
An approach in which respondents are first prompted to choose among the available alternatives and then asked, “Considering only the option you selected, would you choose/purchase it at all?”
Dummy Coding
The use of binary indicators (0 or 1) in regression models to represent categorical attributes in conjoint analysis. This allows researchers to quantify the impact of specific attribute levels on consumer choice.
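A sketch with a hypothetical brand attribute: each non-reference level gets its own 0/1 indicator, and the reference level is coded as all zeros.

```python
# Dummy-code a categorical attribute (hypothetical levels) into 0/1 indicators,
# dropping one reference level as is usual in regression.
levels = ["brand_a", "brand_b", "brand_c"]
reference = "brand_a"

def dummy_code(value):
    # One indicator column per non-reference level.
    return [1 if value == lvl else 0 for lvl in levels if lvl != reference]

print(dummy_code("brand_b"))  # [1, 0]
print(dummy_code("brand_a"))  # [0, 0] -- the reference level
```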
E
Effect Coding
The use of the values 1, 0, and −1 to represent categorical predictor variables in various kinds of estimation models, conveying all the necessary information on group membership.
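A sketch with hypothetical levels: non-reference levels get 1/0 indicators as in dummy coding, but the reference level is coded as −1 on every indicator, so each coded column sums to zero across the levels.

```python
# Effect-code a categorical attribute (hypothetical levels), with the last
# level as the reference coded -1 on every indicator column.
levels = ["low", "mid", "high"]

def effect_code(value):
    if value == levels[-1]:
        return [-1] * (len(levels) - 1)   # reference level
    return [1 if value == lvl else 0 for lvl in levels[:-1]]

print(effect_code("low"))   # [1, 0]
print(effect_code("high"))  # [-1, -1]
```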
Endogeneity
A situation in which an explanatory variable in a model is correlated with the error term, often due to omitted variables, measurement errors, or simultaneity, leading to biased and inconsistent estimates.
Exhaustive Search
see Brute Force Search
Experimental Design
The process of planning and structuring experiments to systematically test hypotheses, control for variables, and ensure valid, reliable, and unbiased results.
Eye Tracking
A technology that measures and records eye movements, gaze patterns, and fixation points to understand visual attention and cognitive processes during tasks or interactions. Eye-tracking techniques can be combined with conjoint experiments to sample additional information.
F
First-Choice Rule
A decision-making model in which individuals select the option with the highest perceived utility or preference among all available choices.
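Sketched in one line with hypothetical utilities, the first-choice rule is simply an argmax over the alternatives:

```python
# First-choice rule: pick the alternative with the highest total utility.
# Utility values here are hypothetical.
utilities = {"option_a": 1.2, "option_b": 2.7, "option_c": 0.4}
choice = max(utilities, key=utilities.get)
print(choice)  # option_b
```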
Fractional Factorial
An experimental design method that tests only a subset of all possible factor combinations, strategically selected to provide insights while reducing the number of experiments needed.
Full Factorial
An experimental design method that tests all possible factor combinations.
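For two hypothetical attributes, the full factorial is simply the Cartesian product of their levels:

```python
from itertools import product

# Enumerate a full factorial design for two hypothetical attributes.
brands = ["A", "B"]
prices = [9.99, 14.99, 19.99]
design = list(product(brands, prices))
print(len(design))  # 2 x 3 = 6 combinations
```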
G
Game Theory
A branch of mathematics and economics that studies strategic interactions between decision-makers, with the outcome for each participant depending on their own choices and the choices of others.
Genetic Algorithm
An optimization technique inspired by natural selection that uses processes like selection, crossover, and mutation to iteratively evolve solutions to complex problems.
Gibbs Sampler
A Markov chain Monte Carlo (MCMC) algorithm used to generate samples from a multivariate probability distribution by iteratively sampling each variable conditional on the others. Most hierarchical Bayesian estimations use Gibbs sampling.
Greedy Search
An optimization algorithm that makes the locally optimal choice at each step but is not guaranteed to find a global optimum.
Gumbel Distribution
A probability distribution used to model the distribution of the maximum or minimum of a sample of random variables, commonly applied in extreme value theory for events like floods or financial crashes. The Gumbel distribution is one of the concepts underlying random utility theory.
Gumbel Error
Random noise modeled with the Gumbel distribution, often used in discrete choice models to represent unobserved factors affecting decision-making or preferences.
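A simulation sketch (deterministic utilities are hypothetical): adding i.i.d. Gumbel errors to the utilities and taking the maximum reproduces logit choice probabilities, which is the core result of random utility theory.

```python
import math
import random

random.seed(42)
v = [1.0, 0.0]  # hypothetical deterministic utilities of two alternatives

def gumbel():
    # Standard Gumbel draw via inverse transform sampling.
    return -math.log(-math.log(random.random()))

n = 100_000
wins = sum(1 for _ in range(n) if v[0] + gumbel() > v[1] + gumbel())
share = wins / n                                        # simulated choice share
logit = math.exp(v[0]) / (math.exp(v[0]) + math.exp(v[1]))
print(round(share, 2), round(logit, 2))                 # the two should agree
```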
H
Heterogeneity
Variation or differences between individuals, groups, or entities in a population, often considered in models to account for diverse behaviors, preferences, or characteristics.
Hierarchical Bayes
An estimation method used in conjoint analysis to estimate individual preferences by combining respondent data with population-level information. It provides highly granular insights into individual and group-level preferences.
Hierarchical Linear Model (Multilevel Model)
A statistical method for analyzing data with nested structures, allowing for the examination of relationships at different levels (e.g., individuals within groups) while accounting for variation at each level.
Holdout Choice Task
A choice task excluded from the estimation process that is used to validate a conjoint model’s predictive accuracy in reflecting preferences by testing it against unseen concepts.
I
Importance (Relative Importance)
A measure of the size of the role played by an attribute of a product or service when making a purchasing decision (see Attribute Importance).
Importance Sampling
A Monte Carlo method used to estimate properties of a target distribution by sampling from a different distribution and reweighting the samples according to their likelihood under the target distribution.
Incentive Alignment
The process of structuring rewards or motivations to ensure that the interests and actions of individuals or stakeholders align with the desired goals or outcomes of a system or organization.
Independence of Irrelevant Alternatives
A property stating that the relative odds of choosing between two options remain unchanged when a third, irrelevant option is added to or removed from the choice set, assuming a logit model framework. This is also known as the red bus/blue bus property.
In-Sample Data
The data used during model training or analysis, in which performance metrics are evaluated on the same dataset to assess how well the model fits the observed data.
Interaction Effect
A situation in which the impact of the level of one attribute on a choice depends on the level of another attribute. Identifying these effects in conjoint analysis provides nuanced insights into how attribute combinations influence consumer preferences.
Interpolation
The process of estimating unknown values within the range of a set of known data points by using mathematical methods, such as linear or polynomial interpolation.
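A linear interpolation sketch (the price points and part-worths are hypothetical): estimating the utility at a price that lies between two measured levels.

```python
# Linear interpolation between two known points (x0, y0) and (x1, y1).
def lerp(x, x0, y0, x1, y1):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical part-worths estimated at prices 10 and 20; interpolate at 15.
print(lerp(15, 10, 0.8, 20, 0.2))  # approximately 0.5
```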
J
K
L
Latent Class Analysis
A method that identifies subgroups within a sample sharing similar preferences, helping uncover hidden market segments. This is a valuable tool for tailoring strategies to specific consumer groups in conjoint studies.
Latent Dirichlet Allocation Model
A generative statistical model used to identify hidden topics in a collection of text documents, in which each document is modeled as a mixture of topics and each topic as a distribution over words.
Level Balance
A situation in which each level of a factor appears with equal frequency across an experimental design, minimizing bias and maintaining fairness in the representation of conditions.
Level Overlap
The degree to which levels of a factor are paired with levels of other factors across experimental conditions, ensuring adequate combinations for robust analysis.
Likelihood
A measure of how well a statistical model explains the observed data, calculated as the probability of the data given specific parameter values in the model.
Linear Coding
The representation of categorical variable levels as numerical values on a linear scale, typically used to model linear relationships between a variable and an outcome.
Line Pricing
A pricing strategy in which a company sets uniform price points for a range of related products or services to simplify customer choices and encourage upselling within the product line.
Logit Model
A statistical model used to predict the probability of a categorical outcome, typically binary, by modeling the log-odds of the outcome as a linear function of predictor variables.
Logit Rule
A decision-making principle in discrete choice modeling, according to which the probability of choosing an option is proportional to the exponential of its utility relative to the sum of exponentials for all available options.
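The logit rule can be sketched as a softmax over utilities (the utility values below are hypothetical):

```python
import math

# Logit rule: choice probability is exp(utility), normalized over all options.
def logit_probabilities(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for three alternatives.
probs = logit_probabilities([1.0, 0.5, 0.0])
print([round(p, 3) for p in probs])  # probabilities sum to one
```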
Log-Linear Coding
The practice in conjoint analysis of transforming price attributes using a logarithmic function. This helps ensure that the resulting part-worth utilities remain monotonic and do not exhibit reversals, thereby maintaining a consistent decreasing relationship with price.
Loss Function
A mathematical function used in optimization and machine learning to quantify the difference between predicted and actual values, used to guide model adjustments to minimize this error.
M
Machine Learning
A branch of artificial intelligence in which computers learn patterns and make predictions or decisions without being explicitly programmed, using data-driven algorithms and models.
Main Effect
The direct, independent impact of an individual factor or variable on the outcome in an experimental or statistical model, ignoring interactions with other factors.
Market Share
The percentage of total sales in a market attributed to a particular company, product, or brand over a defined period of time, indicating its competitive position within the market.
Maximum Difference Scaling (MaxDiff)
An approach that asks respondents to choose the most and least preferred items from a list, ranking attributes by importance. This is effective for distinguishing between highly similar preferences across attributes.
MCMC (Markov Chain Monte Carlo)
A computational algorithm that uses random sampling to approximate complex probability distributions and solve problems in Bayesian inference or statistical modeling.
MCMC Draws (Markov Chain Monte Carlo Draws)
Individual samples generated from a Markov chain Monte Carlo process, representing possible values from the target probability distribution to approximate its characteristics.
Menu-Based Choice
A conjoint-based method designed for markets in which products are chosen from a menu. An example is a fast-food restaurant where consumers can either create a meal by combining single items (e.g., a burger and a soft drink) or choose a pre-configured meal.
Metropolis Algorithm
A foundational Markov chain Monte Carlo method that generates samples from a probability distribution by accepting or rejecting proposed moves with a calculated acceptance probability, ensuring convergence to the target distribution over time.
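A minimal sketch, assuming a standard normal target density and a symmetric uniform proposal (both chosen only for illustration):

```python
import math
import random

random.seed(1)

def log_target(x):
    # Log density of a standard normal, up to an additive constant.
    return -0.5 * x * x

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.uniform(-1, 1)   # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    if random.random() < math.exp(min(0.0, log_target(proposal) - log_target(x))):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 1), round(var, 1))       # near 0 and 1 for this target
```

The rejected proposals matter: keeping the current value in those steps is what makes the chain converge to the target distribution.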
Mixed Logit
A flexible choice model in conjoint analysis that accounts for random variation in individual preferences. It allows for a more realistic representation of consumer behavior by capturing heterogeneity in choice patterns.
Mixture of Normals
A statistical model that represents a probability distribution as a combination of multiple normal (Gaussian) distributions, often used to capture heterogeneity or account for subpopulations within a dataset.
Monotonicity Constraint
A restriction applied in modeling to ensure that the relationship between variables is in a consistent direction (increasing or decreasing) to align with theoretical or practical expectations.
Mother Logit
A term in hierarchical Bayesian modeling referring to the upper-level multinomial logit model that governs the distribution of individual-level preferences or utilities across a population.
Multinomial Logit
A basic model used in conjoint analysis to predict the probability of choosing each alternative. It assumes independence between choices, providing a straightforward approach to analyzing consumer preference data.
Multinomial Probit
A statistical model used for predicting outcomes with multiple categories, in which the choice probabilities are derived from a multivariate normal distribution of latent utilities, allowing for correlated errors across alternatives.
Multivariate Normal Distribution
A probability distribution in which each variable follows a normal distribution and the variables collectively adhere to a specified mean vector and covariance matrix, capturing their relationships.
Multiverse Optimizer
An advanced optimization technique inspired by the dynamics of the multiverse theory, in which multiple candidate solutions evolve in parallel, exploring diverse regions of the solution space to find a global optimum efficiently.
N
Nash Equilibrium
A concept in game theory in which no player can unilaterally improve their outcome given the strategies of all other players.
Nested Logit
A discrete choice model that extends the standard logit model by grouping alternatives into nested structures, allowing for correlated choices within nests and greater flexibility in representing decision-making behavior.
Neuroscience
The scientific study of the nervous system, encompassing its structure, function, development, and influence on behavior and cognitive processes.
Newton and Raftery
A method used in Bayesian analysis to approximate the marginal likelihood of a model from Markov chain Monte Carlo draws (the harmonic mean estimator proposed by Newton and Raftery).
None Option
An alternative included in choice-based models or surveys allowing respondents to opt out of selecting any of the presented options, reflecting a preference for “none of the above.” This can help capture real-world decision-making behavior by avoiding the noise introduced by forcing a decision.
O
Optimization
The process of finding the best solution or outcome from a set of possible choices, typically by maximizing or minimizing a specific objective function subject to constraints.
Orthogonal Design
A design that ensures that attributes in a conjoint study are statistically independent, allowing for a clear estimation of each attribute’s effect. This is essential for accurate data interpretation and reliable preference modeling.
Out-of-Sample Data
Data that are not used during the model training process, often employed for testing or validating the model’s predictive performance on unseen observations.
Outside Good
An option in a discrete model in which consumers choose not to purchase any of the available alternatives, effectively opting out of the market under consideration. This allows for the possibility that all presented options are less attractive than not purchasing at all, providing a more realistic representation of consumer behavior.
Overfitting
A modeling issue in which a model learns the training data too well, including noise and idiosyncratic patterns, resulting in poor generalization and reduced performance on new, unseen data.
P
Partial Profile Design
A method in conjoint analysis in which only a subset of attributes is presented in each choice task, reducing cognitive load on respondents while still enabling the estimation of preferences for all attributes.
Particle Swarm
A computational optimization technique inspired by the collective behavior of swarms of animals, in which a population of candidate solutions (particles) explores the solution space by adjusting their positions based on personal and group experience to find the optimal solution.
Part-Worth Utility
The value placed by a respondent on each attribute level in a conjoint study. These values provide insight into the strength of preferences and are used to calculate overall utility scores for each choice.
Perceptual Mapping
A visual technique used in marketing and research to display consumers’ perceptions of products, brands, or services along key dimensions, often revealing competitive positioning and market opportunities.
Personalized Pricing
A pricing strategy in which prices are tailored to individual customers on the basis of their preferences, purchasing behavior, and willingness to pay, often enabled by data analytics and machine learning.
Point Estimate
The part-worth utilities averaged over the last n MCMC draws, with some draws omitted according to a given skip (thinning) factor.
Posterior
A term used in Bayesian statistics to refer to the updated probability distribution of a parameter after incorporating observed data, reflecting the combination of prior beliefs and new evidence.
Predictive Posterior
A term used in Bayesian statistics for the distribution of future observations or unobserved data predicted using the posterior distribution of the model parameters, combining prior knowledge and observed evidence.
Preference Share
The estimated share of respondents preferring a specific product or option within a conjoint study. It is commonly used to forecast and evaluate competitive positioning. In simulations based on part-worth utility, it is a powerful measure to compare different scenarios. (see also Share of Choice)
Price Sensitivity
A term used in conjoint analysis to describe the degree to which consumers’ choice probability changes in response to price variations. Understanding price sensitivity helps in setting optimal pricing strategies and assessing consumer demand.
Prior
A term used in Bayesian statistics to describe the initial probability distribution representing beliefs or knowledge about a parameter before observing any data.
Prior Covariance
A measure used in Bayesian statistics to quantify the relationship between parameters in the prior distribution, indicating how changes in one parameter are expected to relate to changes in another before observing any data.
Probit Model
A type of regression model used for binary or ordinal dependent variables, in which the probability of an outcome is modeled using the cumulative distribution function of the normal distribution.
Product Acceptance
A simulation model for conjoint analysis results that does not consider the competitive context, in which each concept is evaluated against the threshold of the “outside good.” This technique is primarily used in new product development when competitors are not clearly defined. Product acceptance simulations are often calibrated using traditional five-point purchase intention scales.
Prohibitions
Rules in experimental design, including in conjoint analysis, that prevent certain combinations of attribute levels from appearing together in choice tasks, ensuring realistic or meaningful scenarios.
Promotion
Marketing activities aimed at increasing awareness, interest, and sales of a product or service, typically involving advertising, discounts, special offers, or other incentives.
Purchase Intent
A measure of a consumer’s likelihood or willingness to buy a product or service, often used as an indicator of potential demand or success in marketing and sales strategies.
Purchase Intention
see Purchase Intent
Q
R
Randomized Design
A design that presents conjoint scenarios in a random order to each respondent, minimizing order bias. This approach ensures more reliable data by reducing systematic response patterns.
Randomized First Choice
A simulation approach used in conjoint analysis in which preferences are modeled by incorporating randomness into utility values, allowing for probabilistic predictions of choice outcomes across a population. This is often described as “poor man’s draws” because hierarchical Bayesian draws provide a better framework for probabilistic predictions.
Randomized Parameter Logit
A discrete choice model that extends the standard logit model by allowing parameters to vary randomly across individuals, capturing heterogeneity in preferences within the population.
Random Regret Minimization
A behavioral choice model that assumes that individuals make decisions by minimizing anticipated regret from not choosing other available options, contrasting with utility-maximization approaches.
Random Utility Model
A framework in discrete choice theory that assumes that individuals choose the option that provides the highest utility, with utility consisting of a deterministic component and a random error term capturing unobserved factors.
Random Utility Theory
A theoretical framework that explains individual decision-making by assuming that the utility derived from each option has both a systematic component (observable factors) and a random component (unobservable influences), with choices based on the highest perceived utility.
Red Bus/Blue Bus Property
see Independence of Irrelevant Alternatives
Relative Importance
A measure of the contribution of an attribute to overall preference in conjoint analysis. It helps prioritize features that matter most to consumers, guiding product and marketing strategies (see Attribute Importance).
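A common way to compute relative importance is to take the range (max − min) of each attribute’s part-worths and normalize the ranges to sum to 100%. A sketch with hypothetical part-worths:

```python
# Relative importance from part-worth utilities (hypothetical values):
# the range of each attribute's levels, normalized to sum to 100%.
part_worths = {
    "brand": [0.6, -0.1, -0.5],
    "price": [1.0, 0.2, -1.2],
    "color": [0.1, -0.1],
}
ranges = {attr: max(v) - min(v) for attr, v in part_worths.items()}
total = sum(ranges.values())
importance = {attr: 100 * r / total for attr, r in ranges.items()}
print({attr: round(i, 1) for attr, i in importance.items()})
```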
Response Error
Variability or inaccuracies in respondents’ answers during data collection, often arising from misunderstanding, fatigue, or random noise, potentially affecting the reliability of the results.
Revealed Preference Data
Observational data derived from actual consumer choices in real-world settings, reflecting preferences based on their purchasing behavior or decisions.
Revenue Optimization
The process of strategically adjusting pricing, inventory, or product offerings to maximize revenue, often leveraging data analysis, demand forecasting, and consumer behavior insights.
Reversal
A term used in preference or choice modeling to describe instances in which the predicted or observed ranking of options contradicts expected patterns, such as when a less preferred alternative is chosen over a more preferred one, potentially indicating inconsistencies or noise in the data.
Reversed Dual-Response None
A survey design approach in which respondents are first asked whether any of the options is worth buying and then which they would buy. The reversed approach provides insights into the strength of preference and, by easing cognitive stress, reduces the bias toward selecting a “none” option.
Risk Function
A function used in statistics and decision theory to quantify the expected loss associated with a decision or estimator, often used to evaluate and compare the performance of different models or strategies.
S
Sample Size
The number of observations or respondents included in a study or dataset. The sample size influences the precision, reliability, and generalizability of a statistical analysis.
Scale Factor
A parameter used in choice modeling to reflect the degree of randomness in respondents’ choices, influencing the relative importance of utility differences in determining probabilities.
Screening Task
A task that filters out irrelevant options by asking respondents to identify unacceptable or preferred choices early in a conjoint survey. This step simplifies the subsequent choice tasks and enhances data quality by focusing on relevant attributes.
Self-Explicated Approach
A method in preference modeling in which respondents directly rate the importance of attributes and levels, allowing for faster data collection and simpler analysis than indirect approaches like conjoint analysis.
Share Inflation
A phenomenon in choice modeling in which the predicted market share of an option is overestimated due to factors such as model assumptions, bias in data collection, or simplifications in the analysis.
Share of Choice
A term used to distinguish simulated conjoint analysis results from actual market shares. Although “preference share” is often incorrectly equated with market share in some publications, “share of choice” clarifies that these results represent modeled choices rather than real-world market performance.
Shrinkage
A statistical technique used to improve model estimates by pulling extreme values closer to the mean or a central value, often applied in hierarchical models to balance individual-level data with population-level trends.
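A minimal sketch of the idea (the weighting rule and numbers are illustrative, not a full hierarchical estimator): an individual-level mean is pulled toward the population mean, and the pull is stronger when the respondent contributes few observations.

```python
def shrink(individual_mean, population_mean, n_obs, prior_strength=5.0):
    """Precision-style weighting: more observations mean less shrinkage."""
    weight = n_obs / (n_obs + prior_strength)
    return weight * individual_mean + (1 - weight) * population_mean

# hypothetical respondent-level estimates
sparse = shrink(individual_mean=3.0, population_mean=1.0, n_obs=2)   # pulled strongly
rich = shrink(individual_mean=3.0, population_mean=1.0, n_obs=50)    # barely moved
```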
Simulated Annealing
An optimization algorithm, inspired by the annealing process in metallurgy, that iteratively explores the solution space by allowing occasional uphill moves to escape local optima, gradually reducing randomness to converge on a global optimum.
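A minimal sketch of the algorithm on a one-dimensional test function (the objective, step size, and cooling schedule are arbitrary illustrations):

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Minimize f by a random walk that sometimes accepts uphill moves."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        fc = f(cand)
        # always accept downhill; accept uphill with probability exp(-delta / t)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # gradually reduce randomness
    return best, fbest

# a bumpy objective with several local minima
f = lambda x: (x - 2) ** 2 + math.sin(5 * x)
best, fbest = simulated_annealing(f, x0=-5.0)
```

The occasional uphill acceptances early on let the search escape local minima before the temperature drops and the walk settles into a good region.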
Simulation
A term used in conjoint analysis to refer to predicting consumer behavior under various product scenarios, estimating preference share for different configurations. This aids in strategic decision-making by modeling potential outcomes based on consumer preferences.
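For example, a simple first-choice (“maximum utility”) simulator can be sketched as follows; the respondent-by-product utility matrix is hypothetical:

```python
def first_choice_shares(utilities):
    """Each respondent 'chooses' the product with the highest total utility."""
    n_products = len(utilities[0])
    counts = [0] * n_products
    for row in utilities:
        counts[row.index(max(row))] += 1
    total = len(utilities)
    return [c / total for c in counts]

# hypothetical utilities: rows are respondents, columns are products
utils = [
    [2.0, 1.0, 0.5],
    [0.5, 1.5, 1.0],
    [1.0, 0.8, 2.2],
    [1.7, 0.2, 0.3],
]
shares = first_choice_shares(utils)
```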
Situational Choice Experiment
A choice experiment design that accounts for the influence of specific situational factors, such as context, environment, or temporary conditions, on individuals’ decision-making, reflecting variations in preferences caused by external circumstances.
Stated Preference Data
Data collected from individuals by asking them about their preferences or the choices they would make in hypothetical scenarios, often used when real-world behavioral data are unavailable.
Stationary Distribution
A distribution of Markov chain states that remains unchanged as the system evolves, representing the long-term behavior of the chain regardless of its starting point.
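The stationary distribution of a small transition matrix can be approximated by repeatedly applying the matrix to any starting distribution (a power-iteration sketch; the two-state matrix is illustrative):

```python
def stationary(P, iters=200):
    """Approximate the stationary distribution of transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n  # any starting distribution works
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# two-state chain: state 0 is 'sticky', state 1 leaks back quickly
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = stationary(P)  # converges to roughly (5/6, 1/6)
```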
Summed Pricing Design
A conjoint analysis approach in which the total price of an option is calculated as the sum of individual component prices, helping to isolate the effects of specific attributes on overall price perceptions and preferences.
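The pricing rule itself is straightforward to sketch (the component names and prices are hypothetical):

```python
def summed_price(selected_components, component_prices):
    """Total price of a profile is the sum of its components' prices."""
    return sum(component_prices[c] for c in selected_components)

component_prices = {"base_model": 299.0, "camera_upgrade": 50.0, "extra_storage": 80.0}
total = summed_price(["base_model", "extra_storage"], component_prices)  # 379.0
```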
T
TCA (Traditional Conjoint Analysis)
The classical method of preference measurement in which respondents evaluate a set of full-profile options, usually by rating or ranking them. This allows researchers to estimate the relative importance of attributes and preferences for attribute levels with simple monotone regression. Traditional conjoint analysis is limited to a small number of attributes and levels; otherwise, the design and the number of profiles respondents must evaluate grow too large.
Top n Simulation
A predictive technique in choice modeling that focuses on estimating the probability or likelihood of an option being among the top n choices instead of the single most preferred option.
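Under a logit model, a top-n share can be estimated by Monte Carlo simulation: add Gumbel-distributed noise to each option’s utility and count how often it lands among the top n (the utilities below are illustrative):

```python
import math
import random

def top_n_share(utilities, n, draws=20000, seed=0):
    """Estimate P(each option ranks in the top n) under logit-style noise."""
    rng = random.Random(seed)
    k = len(utilities)
    hits = [0] * k
    for _ in range(draws):
        # utility plus a standard Gumbel draw, as in the random-utility model
        noisy = [u - math.log(-math.log(rng.random())) for u in utilities]
        ranked = sorted(range(k), key=lambda i: noisy[i], reverse=True)
        for i in ranked[:n]:
            hits[i] += 1
    return [h / draws for h in hits]

shares = top_n_share([1.0, 0.5, 0.0, -0.5], n=2)
```

Because every draw contributes exactly n hits, the shares sum to n rather than to one.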
Trade-Off Model
A model describing situations in which different goals or resources must be balanced against each other because they cannot all be optimized simultaneously. It highlights the compromises required when choosing between competing alternatives, such as cost versus quality or efficiency versus flexibility.
Truncated Normal Distribution
A normal distribution restricted to a subinterval of its support, for example only the left or right half, so that draws outside the bounds are excluded. In the multivariate case, the variables collectively adhere to a specified mean vector and covariance matrix while respecting the truncation. Truncated normals are often used for estimating price parameters or other ordered attributes.
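A one-sided truncated normal can be drawn by simple rejection sampling (a sketch; more efficient samplers exist for tight truncation regions):

```python
import random

def sample_truncated_normal(mu, sigma, lower, n, seed=0):
    """Draw n values from a Normal(mu, sigma) restricted to [lower, inf)."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        x = rng.gauss(mu, sigma)
        if x >= lower:  # reject anything below the truncation bound
            draws.append(x)
    return draws

# e.g. a coefficient constrained to be non-negative
draws = sample_truncated_normal(mu=0.0, sigma=1.0, lower=0.0, n=1000)
```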
TURF (Total Unduplicated Reach and Frequency)
A method of evaluating the reach and frequency of attribute combinations, aiming to maximize appeal across consumer segments. It is used to optimize product lineups by identifying combinations that maximize total consumer reach.
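A brute-force TURF search over all k-item lineups can be sketched as follows (the respondent sets are hypothetical):

```python
from itertools import combinations

def turf(reach_sets, k):
    """Find the k-item lineup covering the most distinct respondents."""
    best_combo, best_reach = None, -1
    for combo in combinations(reach_sets, k):
        covered = set().union(*(reach_sets[i] for i in combo))
        if len(covered) > best_reach:
            best_combo, best_reach = combo, len(covered)
    return best_combo, best_reach

# hypothetical data: which respondents would buy each flavor
reach_sets = {
    "vanilla":   {1, 2, 3, 4},
    "chocolate": {3, 4, 5},
    "mango":     {6, 7},
}
combo, reach = turf(reach_sets, k=2)
```

Note that the most popular pair is not chosen: chocolate’s buyers largely duplicate vanilla’s, so the niche mango flavor adds more unduplicated reach.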
U
Upper-Level Model
A term used in hierarchical modeling to describe the component that represents population-level parameters or group-level trends, influencing individual-level models and capturing shared patterns across subgroups.
Utility Balance
The concept of balancing the relative attractiveness of choices in a conjoint survey to avoid response bias. This ensures that no single option is overwhelmingly preferred, leading to more balanced and reliable data collection.
V
Volumetric Conjoint
A conjoint analysis approach that incorporates quantity or volume preferences into the choice tasks, enabling researchers to estimate not just what consumers prefer but also how much of a product they are likely to purchase.
W
Willingness to Buy
A consumer’s behavioral intention to purchase a specific product at a given price. Willingness to buy is often analyzed alongside willingness to pay in conjoint analysis to derive valuations. Ideally, both measures should produce a consistent preference order between two options.
Willingness to Pay (WTP)
An estimate of the amount consumers are willing to spend for specific product features in a conjoint analysis. This provides valuable insights into pricing strategy by quantifying the value consumers assign to attributes.
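In a linear-price model, a common back-of-the-envelope conversion divides a part-worth difference by the price coefficient (the numbers below are hypothetical, and this simple ratio is only one of several ways to derive WTP):

```python
def willingness_to_pay(utility_gain, price_coefficient):
    """Money equivalent of a utility gain, given utils lost per currency unit."""
    return utility_gain / -price_coefficient

# hypothetical part-worths: a storage upgrade adds 0.8 utils,
# and each extra euro of price costs 0.02 utils (coefficient -0.02)
wtp = willingness_to_pay(0.8, price_coefficient=-0.02)  # roughly 40 euros
```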
Wishart Distribution
A probability distribution that describes the covariance matrix of multivariate normal random variables, often used in Bayesian statistics as a prior for estimating covariance structures.
X
Y
Z