Introduction
Regression analysis consists of identifying and quantifying the relationship between a dependent variable and one or more independent variables. A model of the relationship is hypothesized, and estimates of the parameter values are used to develop an estimated regression equation. Various tests are then employed to determine whether the model is adequate. If the model is deemed adequate, the estimated regression equation can be used to predict the value of the dependent variable for given values of the independent variables (Cohen et al., 2014).
Focus on intuition rather than formulas to arrive at the regression model.
To arrive at the regression model with a focus on intuition rather than formulas, the key quantity is R-squared, whose defining equation is:
R-squared = 1 - (unexplained variance / total variance)
R-squared is a statistical measure that represents the proportion of the variance in a dependent variable that is explained by an independent variable. In investing, R-squared is commonly interpreted as the percentage of a fund or security's movements that can be explained by movements in a benchmark index.
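As a minimal sketch of this definition, the function below (hypothetical data and a hand-rolled helper rather than any particular library) computes R-squared as one minus the ratio of unexplained variance to total variance.

```python
def r_squared(y_actual, y_predicted):
    """R-squared = 1 - (unexplained variance / total variance)."""
    mean_y = sum(y_actual) / len(y_actual)
    # Total variance: squared deviations of the actual values from their mean.
    ss_total = sum((y - mean_y) ** 2 for y in y_actual)
    # Unexplained variance: squared residuals left over after prediction.
    ss_residual = sum((y - yp) ** 2 for y, yp in zip(y_actual, y_predicted))
    return 1 - ss_residual / ss_total

# Hypothetical example: predictions that track the actual values fairly closely.
y_actual = [3.0, 5.1, 7.2, 8.9, 11.0]
y_predicted = [3.1, 5.0, 7.0, 9.1, 10.8]
print(r_squared(y_actual, y_predicted))  # close to 1: little variance is left unexplained
```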
Compare/Contrast Direct and Inverse Relationships and Examples.
A direct relationship means that when one variable increases, the other increases as well, whereas an inverse relationship works the other way: as x increases, the value of y decreases. Mathematically, an inverse relationship has the form y = k / x, where k is a constant. In summary, in a direct relationship an increase in x leads to a proportionally sized increase in y, and a decrease has the opposite effect, producing a straight-line graph. In an inverse relationship, increasing x produces a corresponding decrease in y, and a decrease in x produces an increase in y, producing a curved graph in which the drop is rapid at first but slows for larger values of x (Schreiber et al., 2006). A short numerical sketch of the contrast follows.
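In the sketch below, the constant k = 12 is arbitrary and chosen purely for illustration; the loop tabulates a direct relationship y = kx and an inverse relationship y = k/x over the same x values so the two behaviours can be compared side by side.

```python
k = 12  # arbitrary constant for illustration

for x in [1, 2, 3, 4, 6, 12]:
    direct = k * x    # direct: y grows in proportion to x
    inverse = k / x   # inverse: y shrinks as x grows
    print(f"x={x:2d}  direct y=kx -> {direct:3d}   inverse y=k/x -> {inverse:5.1f}")
```

Doubling x doubles the direct value but halves the inverse one, which is exactly the contrast described above.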
Association and Causation in terms of relationships. In particular, does correlation imply causation?
To begin with, it is important to understand what correlation and causation each mean. A correlation is a mutual relationship between two variables. Causation is the relationship between cause and effect: when a cause produces an effect, that is causation. In other words, a correlation between two events or variables simply shows that a relationship exists, whereas causation is more specific and states that one event actually causes the other (Wright, 1921). Therefore, the statement that correlation does not imply causation means that just because an association between two variables can be observed, it does not necessarily follow that one causes the other. It may indeed be the case that one event causes the other, but we cannot tell that from the correlation alone; further study would be required before such a conclusion could be reached.
Spurious Correlation and Examples
In statistics, a spurious correlation is a mathematical association in which two or more variables are not causally connected to each other, yet it may be wrongly inferred that they are, due either to chance or to the presence of a third, unseen factor referred to as a confounding variable (Simon, 1954). An example of a spurious correlation can be seen by examining a town's ice cream sales. These sales are at their highest when the number of drownings in the town's swimming pools is also at its highest. To argue that ice cream sales cause drowning, or vice versa, would be to assert a spurious correlation between the two. In reality, a heat wave may have driven both; the heat wave is an example of an unseen variable, also known as a confounding variable (Simon, 1954). A small simulation of this pattern is sketched below.
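In this purely hypothetical simulation (the coefficients and noise levels are invented, and statistics.correlation requires Python 3.10 or newer), temperature acts as the confounder driving both outcomes, which end up strongly correlated with each other despite having no causal link.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

# Hypothetical confounder: daily temperature drives both ice cream sales and
# drownings, which are otherwise unrelated to one another.
temperature = [random.uniform(10, 35) for _ in range(200)]
ice_cream_sales = [5 * t + random.gauss(0, 15) for t in temperature]
drownings = [0.2 * t + random.gauss(0, 1) for t in temperature]

# The two outcomes correlate strongly purely through the confounder.
print(statistics.correlation(ice_cream_sales, drownings))
```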
Measuring Correlation - Correlation Coefficient R and Its Interpretation
In statistics, the correlation coefficient r measures the strength and direction of a linear relationship between two variables on a scatterplot. The value of r always lies between -1 and +1. To interpret its value, note which of these endpoints your computed r is closer to: values near -1 or +1 indicate a strong linear relationship, while values near 0 indicate a weak one (Taylor, 1990).
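As a minimal sketch with made-up data, the function below computes Pearson's r directly from its standard definition: the covariance of the two variables divided by the product of their standard deviations.

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance-style numerator and the two spread terms in the denominator.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sy = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data with a strong positive linear relationship.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(pearson_r(x, y))  # close to +1
```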
Interpretation of Coefficient of Determination
The coefficient of determination, denoted R2, is a key output of regression analysis. It is interpreted as the proportion of the variance in the dependent variable that is predictable from the independent variable. The coefficient of determination is the square of the correlation (r) between predicted y scores and actual y scores; therefore, it ranges from zero to one. With simple linear regression, the coefficient of determination also equals the square of the correlation between the x and y scores (Ozer, 1985).
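As a quick check of that last claim, the following sketch (hypothetical data, with the least-squares slope and intercept computed by hand) fits a simple linear regression and confirms that R-squared from the fit equals the square of the correlation between x and y.

```python
x = [1, 2, 3, 4, 5, 6]
y = [2.0, 4.1, 5.9, 8.2, 9.8, 12.1]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Least-squares slope and intercept for simple linear regression.
sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sxx = sum((a - mean_x) ** 2 for a in x)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
y_hat = [intercept + slope * a for a in x]

# R-squared: proportion of the variance in y explained by the fitted line.
ss_res = sum((b - bh) ** 2 for b, bh in zip(y, y_hat))
ss_tot = sum((b - mean_y) ** 2 for b in y)
r_squared = 1 - ss_res / ss_tot

# Pearson r between x and y; its square should match R-squared.
r = sxy / (sxx ** 0.5 * ss_tot ** 0.5)
print(r_squared, r ** 2)  # the two values agree, up to floating-point error
```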
Use of a Regression Model for Forecasting and Prediction
Regression analysis is widely used for prediction and forecasting, where its use overlaps substantially with the field of machine learning. Regression analysis is also used to identify which of the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables; however, this can lead to illusory or false relationships, so caution is advisable. A brief end-to-end sketch of using a fitted regression equation for prediction is shown below.
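In this illustrative sketch, the data and the interpretation of x and y are invented; the snippet estimates a simple regression equation by ordinary least squares and then uses it to predict the dependent variable at new, unobserved values of the independent variable.

```python
# Hypothetical observations: x might be advertising spend, y the resulting sales.
x = [10, 20, 30, 40, 50]
y = [25.0, 39.5, 56.1, 69.8, 85.2]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Estimate the slope and intercept by ordinary least squares.
sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sxx = sum((a - mean_x) ** 2 for a in x)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(f"estimated regression equation: y = {intercept:.2f} + {slope:.2f} * x")

# Use the estimated equation to forecast y at unseen values of x.
for new_x in [60, 70]:
    print(new_x, round(intercept + slope * new_x, 1))
```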
References
Cohen, P., West, S. G., & Aiken, L. S. (2014). Applied multiple regression/correlation analysis for the behavioral sciences. Psychology Press.
Montgomery, D. C., Johnson, L. A., & Gardiner, J. S. (1990). Forecasting and time series analysis. New York: McGraw-Hill.
Ozer, D. J. (1985). Correlation and the coefficient of determination. Psychological Bulletin, 97(2), 307.
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323-338.
Simon, H. A. (1954). Spurious correlation: A causal interpretation. Journal of the American Statistical Association, 49(267), 467-479.
Taylor, R. (1990). Interpretation of the correlation coefficient: A basic review. Journal of Diagnostic Medical Sonography, 6(1), 35-39.
Wright, S. (1921). Correlation and causation. Journal of Agricultural Research, 20(7), 557-585.