edcf_MailingList
* Required
Email address *
NAME and SURNAME *
What is multivariate analysis?
1 point
it is the study of real phenomena considering independent experimental measurements
it is the study of real phenomena using a collection of experimental measurements
it is the study of real phenomena considering just one experimental measurement
all true
Multivariate analysis goals
1 point
Description of complex phenomena, such as food quality: the weight of an apple
Description of simple phenomena, such as food quality (e.g. fruit): sugars, acids, pH, free ethylene, ...
Description of simple phenomena, such as the weight of an apple
Description of complex phenomena, such as food quality (e.g. fruit): sugars, acids, pH, free ethylene, ...
How do you calculate the elements of the covariance matrix?
1 point
Using correlation coefficients (C) and averages (V) of the variables, the equation is: element Eik = Cik * Vi * Vk
Using correlation coefficients (C) and variances (V) of the variables, the equation is: element Eik = Cik * Vi ^ Vk
Using averages (C) and variances (V) of the variables, the equation is: element Eik = Cik * Vi * Vk
Using correlation coefficients (C) and variances (V) of the variables, the equation is: element Eik = Cik * Vi * Vk
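The relation between covariance and correlation can be checked numerically. Note that, written out, each covariance element is the correlation coefficient times the standard deviations (the square roots of the variances) of the two variables. A minimal NumPy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # three correlated variables

cov = np.cov(X, rowvar=False)        # covariance matrix (the Eik elements)
corr = np.corrcoef(X, rowvar=False)  # correlation matrix (the Cik elements)
std = np.sqrt(np.diag(cov))          # standard deviations, sqrt(Vi)

# Eik = Cik * sqrt(Vi) * sqrt(Vk)
reconstructed = corr * np.outer(std, std)
print(np.allclose(cov, reconstructed))  # True
```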
Should the data be scaled?
1 point
YES, to eliminate the numerical differences between variables
YES, to reduce the excess of information in a variable
all true
YES, to eliminate the differences in measurement units
Should the data be scaled?
1 point
NO, to reduce the excess of information in a variable
NO, to reduce the excess of information in a variable
all false
YES, to eliminate the differences in measurement units
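Autoscaling (mean-centering followed by division by the standard deviation) is a common way to remove both unit and magnitude differences between variables. A minimal NumPy sketch; the variables and numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# two variables with very different units and magnitudes
# (e.g. sugars in g/L around 120, pH around 3.4 -- illustrative values)
X = np.column_stack([rng.normal(120, 15, 100), rng.normal(3.4, 0.2, 100)])

# autoscaling: mean-center, then divide each column by its standard deviation
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

print(X_scaled.mean(axis=0))         # ~0 for every column
print(X_scaled.std(axis=0, ddof=1))  # 1 for every column
```

After autoscaling, every variable contributes on an equal footing regardless of its original unit.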
Is a gas chromatogram a selective observable?
1 point
NO: the intensity of a peak does not depend on the concentrations of several compounds with similar elution times
YES: the intensity of a peak can result from the concentrations of several compounds with similar elution times
NO: the intensity of a peak can result from the concentrations of several compounds with similar elution times
all false
Is an array of chemical sensors a nonselective observable?
1 point
NO: the response of a sensor is given by the combination of several substances, depending on their concentration and their affinity with the sensor itself
YES: the response of a sensor is given by the combination of several substances, depending on their concentration and their affinity with the sensor itself
The response of a sensor is not given by the combination of several substances
YES: the response of a sensor is not given by the combination of several substances
Two random variables X and Y are uncorrelated if:
1 point
E[X*Y] = E[X] + E[Y]
E[X*Y] = E[X] - E[Y]
E[X*Y] = E[X] ^ E[Y]
E[X*Y] = E[X] * E[Y]
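The uncorrelatedness condition E[X*Y] = E[X]*E[Y] can be checked empirically on simulated data. A minimal NumPy sketch (the variables and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
X = rng.normal(size=n)
Y = rng.normal(size=n)            # independent of X, hence also uncorrelated

lhs = np.mean(X * Y)              # E[X*Y]
rhs = np.mean(X) * np.mean(Y)     # E[X]*E[Y]
print(abs(lhs - rhs))             # ~0, up to sampling noise

Z = X + 0.5 * rng.normal(size=n)  # strongly correlated with X
print(abs(np.mean(X * Z) - np.mean(X) * np.mean(Z)))  # ~1, clearly nonzero
```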
Two random variables X and Y are independent if:
1 point
P[X*Y] = P[X] * P[Y]
P[X*Y] = P[X] + P[Y]
P[X*Y] = P[X] - P[Y]
P[X*Y] = P[X] ^ P[Y]
Effect of overfitting
1 point
The model is not specialized in describing the calibration data and is not able to solve the classification problem in the validation procedure
The model is too specialized in describing the validation data and is not able to solve the classification problem in the calibration procedure
The model is too specialized in describing the calibration data and is not able to solve the classification problem in the validation procedure
all false
In Multiple Linear Regression, do we use calibration and validation?
1 point
NO, it is not necessary to do calibration because the matrix B is always known
YES. Validation: by measuring the observations in the Y matrix related to the X matrix variables, we can determine the matrix B. Use: knowing the matrix B, we can estimate the values in matrix X by measuring the observations in matrix Y
NO, in calibration we can only measure the observations in the X matrix related to the B matrix variables to determine the matrix Y
YES. Calibration: by measuring the observations in the Y matrix related to the X matrix variables, we can determine the matrix B. Use: knowing the matrix B, we can estimate the values in matrix X by measuring the observations in matrix Y
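The calibration and use steps can be sketched with ordinary least squares, assuming the linear model Y = XB. All matrices below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Calibration set: known X (e.g. concentrations) and measured Y (e.g. responses)
B_true = rng.normal(size=(3, 5))   # unknown in practice
X_cal = rng.normal(size=(40, 3))
Y_cal = X_cal @ B_true + 0.01 * rng.normal(size=(40, 5))

# Calibration: least-squares estimate of B from (X_cal, Y_cal)
B_hat, *_ = np.linalg.lstsq(X_cal, Y_cal, rcond=None)

# Use: knowing B, estimate X for a new sample from its measured Y
x_new = rng.normal(size=(1, 3))
y_new = x_new @ B_true
x_est, *_ = np.linalg.lstsq(B_hat.T, y_new.T, rcond=None)
print(x_est.T)  # close to x_new
```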
The normal distribution preserves its significance in multivariate statistics: the mean and variance become
1 point
The average becomes a number; the variance a matrix (the covariance matrix)
The average becomes a vector; the variance a matrix (the covariance matrix)
The average becomes a vector; the variance a vector (a covariance vector)
All false
Loadings and scores are:
1 point
The scores are the projections of the original axes in the subspace identified by the principal components. The loadings are the new coordinates of the vectors corresponding to the observations (the rows of the matrix X) in the basis of the principal components
all true
The loadings are the projections of the original axes in the subspace identified by the principal components. The scores are the new coordinates of the vectors corresponding to the observations (the rows of the matrix X) in the basis of the principal components
all false
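Scores and loadings can be computed from the singular value decomposition of the mean-centered data matrix. A minimal NumPy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))  # 50 observations, 4 variables
Xc = X - X.mean(axis=0)                                 # mean-center

# SVD of the centered data: V holds the loadings, U*S the scores
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T   # projection of each original axis onto each component
scores = U * S    # coordinates of the observations in the PC basis

# The scores reproduce the observations in the principal-component basis
print(np.allclose(Xc, scores @ loadings.T))  # True
```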
Univariate analysis is:
1 point
The distribution of multiple variables, including their central tendency (including the mean, median, and mode) and dispersion
The distribution of a single variable, including its dispersion (including the mean, median, and mode) and central tendency
All true
The distribution of a single variable, including its central tendency (including the mean, median, and mode) and dispersion
What is the major differentiating point between univariate and bivariate analysis?
1 point
The analysis of the relationship between the two variables.
Bivariate analysis is a simple (two variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously)
All true
Bivariate analysis goes beyond the simply descriptive
Why Multivariate?
1 point
Typically more than one measurement is taken on a given experimental unit
Need to consider all the measurements together so that one can understand how they are related
Need to consider all the measurements together so that one can extract essential structure
All true
What does MVA software generate?
1 point
The MVA software generates one main type of plot to represent the data: score plots
The MVA software generates one main type of plot to represent the data: loadings plots
All false
The MVA software generates two main types of plots to represent the data: score plots and loadings plots
The Score plot:
1 point
Each component has a set of loadings, or weights, which express the projection of each original variable onto each new component
The score plot shows all the original data points (variables) in a new set of coordinates, or components. Each score is the value of that data point on one of the new component dimensions
The score plot shows all the original data points (observations) in a new set of coordinates, or components. Each score is the value of that data point on one of the new component dimensions
The score plot shows all the original data points (observations) in the same set of coordinates, or components. Each score is the value of that data point on one of the component dimensions
Loadings plots:
1 point
The loadings plot shows all the original data points (observations) in a new set of coordinates, or components. Each score is the value of that data point on one of the new component dimensions
The loadings plot shows all the original data points (variables) in a new set of coordinates, or components. Each score is the value of that data point on one of the new component dimensions
All false
This is the equivalent of the score plot, only from the point of view of the original variables. Each component has a set of loadings, or weights, which express the projection of each original variable onto each new component.
Multivariate Analysis: Benefits
1 point
The potential benefit is to explore the interrelationships between different process variables.
This is a low-cost way to investigate options.
Some important parameters, like final product quality, cannot be measured in real time. They can, however, be inferred from other variables that are measured online.
All true
Multiple Regression
1 point
it is the extension of simple linear regression to more than one (continuous/ordinal) independent variable
All true
We use least squares in exactly the same way to obtain estimates of the regression coefficients
We have a plane in multidimensional space
Are the variables always independent?
1 point
Usually two variables are always 100% correlated (c=1) and never uncorrelated (c=0)
Usually two variables are not 100% correlated (c=0) or not at all correlated (c=1); there is always a partial correlation between variables
Usually two variables are not 100% correlated (c=1) or not at all correlated (c=0); there is always a partial correlation between variables
All true
Multivariate data are shown as:
1 point
The multivariate data are shown as a matrix
The multivariate data are shown as a vector
The multivariate data are shown as a scalar
The multivariate data are shown as variation
Observation space
1 point
Each multivariate measurement is represented by a vector in a space of N dimensions; N is equal to the size of the vector that expresses the observation
The statistical distribution of points (vectors) defines the properties of the entire data set.
For each multivariate data set we can define a multivariate PDF.
All true
Hypothesis of pattern recognition
1 point
Similar samples are represented by close points; hence there is a mutual relation between distance and similarity between samples
Similar samples are represented by distant points; hence there is a mutual relation between distance and similarity between samples
Similar samples are represented by close points; hence there is a mutual relation between distance and similarity between variables
All false
Collinearity is expressed by:
1 point
The Correlation matrix
The covariance matrix.
The PCA
All true
In case of collinearity:
1 point
the off-diagonal terms of the covariance matrix are nonzero.
the off-diagonal terms of the covariance matrix are zero.
the diagonal terms of the covariance matrix are zero.
the diagonal terms of the covariance matrix are nonzero.
What does it mean to remove collinearity?
1 point
it means transforming the correlation matrix into diagonal form by introducing new latent variables. The principal component analysis technique allows, among other things, this result to be obtained
it means transforming the covariance matrix into diagonal form by introducing new latent variables. The principal component analysis technique does not allow this result to be obtained
it means transforming the covariance matrix into diagonal form by introducing new latent variables. The principal component analysis technique allows, among other things, this result to be obtained
all false
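That projecting onto principal components diagonalizes the covariance matrix (i.e. removes the collinearity) can be verified numerically. A minimal NumPy sketch on simulated collinear data:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 3)) @ rng.normal(size=(3, 3))  # collinear variables
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)
print(np.allclose(cov, np.diag(np.diag(cov))))  # False: off-diagonal terms present

# Latent variables: project onto the eigenvectors of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
scores = Xc @ eigvecs

# The covariance matrix of the scores is diagonal: collinearity removed
cov_scores = np.cov(scores, rowvar=False)
off_diag = cov_scores - np.diag(np.diag(cov_scores))
print(np.allclose(off_diag, 0))  # True
```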
Does PLS-DA suffer from overfitting?
1 point
No, but the number of latent variables must be optimized in a cross-validation process.
Yes. The number of latent variables must not be optimized in a cross-validation process.
Yes. The number of latent variables must be optimized in a calibration process.
Yes. The number of latent variables must be optimized in a cross-validation process.
What is the overfitting problem?
1 point
Overfitting is given by the fact that the latent variable k is obtained by fitting the subspace of dimension k + 1. The latent variables are orthogonal to each other, so there is no limit to the possibility of fitting the data in calibration.
Overfitting is given by the fact that the latent variable k is obtained by fitting the subspace of dimension k + 1. The latent variables are not orthogonal to each other, so there is no limit to the possibility of fitting the data in calibration.
Overfitting is given by the fact that the latent variable k is obtained by fitting the subspace of dimension k - 1. The latent variables are not orthogonal to each other, so there is no limit to the possibility of fitting the data in calibration.
All false
What does cross-validation mean?
1 point
Cross-validation sets the number of latent variables from the accuracy estimated on the validation set. Usually this error is larger than the error obtained from the model on the calibration data
Cross-calibration sets the number of latent variables from the accuracy estimated on the validation set. Usually this error is larger than the error obtained from the model on the validation data
Cross-validation sets the number of latent variables from an accuracy that cannot be estimated on the calibration data
All false
How are the cross-validation errors quantified?
1 point
The cross-validation errors are quantified by one variable: RMSEC (Root Mean Square Error in Calibration)
The cross-validation errors are quantified by one variable: RMSECV (Root Mean Square Error of Calibration in Validation)
All false
The cross-validation errors are quantified by two variables: RMSEC (Root Mean Square Error in Calibration) and RMSECV (Root Mean Square Error of Calibration in Validation)
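Both figures can be computed for a simple least-squares model. A minimal NumPy sketch using leave-one-out cross-validation for RMSECV (the simulated line-fit data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 30)  # simulated calibration data
A = np.column_stack([x, np.ones_like(x)])   # design matrix for a straight line

# RMSEC: error of the fitted model on the calibration data themselves
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmsec = np.sqrt(np.mean((A @ coef - y) ** 2))

# RMSECV: leave-one-out cross-validation error
residuals = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    residuals.append(A[i] @ c - y[i])
rmsecv = np.sqrt(np.mean(np.square(residuals)))

print(rmsec, rmsecv)  # RMSECV is the larger of the two
```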
In the PLS algorithm
1 point
The X matrix is decomposed into principal components, and the principal components of Y are rotated in the direction of maximum correlation with the principal components of X
All true
The Y matrix is decomposed into principal components, and the principal components of X are rotated in the direction of maximum correlation with the principal components of Y
All false
Does PLS have latent variables?
1 point
PLS has latent variables similar to the principal components, maximizing the variance of both Y and X
PLS has latent variables similar to the principal components, maximizing the variance of X
PLS has latent variables similar to the principal components, maximizing the variance of Y
All false
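The first PLS latent variable can be sketched directly: the first weight vector w, proportional to Xᵀy, gives the direction whose score t = Xw has maximum covariance with y. This is a minimal single-response NumPy illustration on simulated data, not a full PLS implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(80, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=80)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

# First PLS weight: the unit direction w maximizing cov(X w, y)
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                    # first latent variable (X-score)

# No other unit direction achieves a larger covariance with y
best = abs(t @ yc)
for _ in range(200):
    v = rng.normal(size=5)
    v /= np.linalg.norm(v)
    assert abs((Xc @ v) @ yc) <= best + 1e-9
print(best)
```

By the Cauchy-Schwarz inequality, w ∝ Xᵀy is exactly the maximizer, which is why the random directions in the loop never beat it.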