Logistic regression is a form of regression used when the dependent variable is dichotomous and the independent variables are of any type. Logistic regression can be used to predict a dependent variable on the basis of the independents; to determine the percentage of variance in the dependent variable explained by the independents; to rank the relative importance of the independents; to assess interaction effects; and to understand the impact of covariate control variables. Logistic regression applies maximum likelihood estimation after transforming the dependent variable into a logit variable (the natural log of the odds of the dependent occurring or not). In this way, logistic regression estimates the probability of a certain event occurring. As a statistical technique, it points out the differences between two or more groups based on several characteristics. It is often used to determine which customers are likely to buy a company's product, to decide whether a bank should offer a loan to a new company, or to identify patients who may be at high risk for medical problems.

R2-like measures are an attempt to measure the strength of association. For small samples, for instance, an R2-like measure might be high even when goodness of fit is unacceptable by chi-square or other tests. Aldrich and Nelson's pseudo-R2 serves as an analog to the squared contingency coefficient, with an interpretation like R2; its maximum is less than 1, and it may be used in either dichotomous or multinomial logistic regression. In the paper, "The Variability of Pseudo R2s in Logistic Regression Models", the authors, Wade Rose and Inder Jit Singh Mann, discuss the tendency of various pseudo-R2s to have different absolute values and different percentages of change from one model to another. They also discuss the variability of pseudo-R2 and the importance of knowing how it is to be used, providing suitable examples to compare the strengths, weaknesses and applicability of these measures.
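To make the measures concrete, here is a minimal sketch in Python of two common pseudo-R2s, Aldrich and Nelson's and McFadden's, computed from the null and fitted log-likelihoods of a dichotomous model; the log-likelihood values and sample size below are made up for illustration.

    def aldrich_nelson_r2(ll_null, ll_model, n):
        """Aldrich-Nelson pseudo-R2: chi2 / (chi2 + n), where chi2 is the
        likelihood-ratio statistic 2*(LL_model - LL_null). Its maximum is
        strictly less than 1."""
        chi2 = 2.0 * (ll_model - ll_null)
        return chi2 / (chi2 + n)

    def mcfadden_r2(ll_null, ll_model):
        """McFadden's pseudo-R2: 1 - LL_model / LL_null."""
        return 1.0 - ll_model / ll_null

    # Hypothetical log-likelihoods from a fitted dichotomous model, n = 200.
    ll_null, ll_model, n = -138.6, -110.2, 200
    print(aldrich_nelson_r2(ll_null, ll_model, n))  # ~0.221
    print(mcfadden_r2(ll_null, ll_model))           # ~0.205

The two statistics already disagree on this one hypothetical fit, which is exactly the kind of variability the paper examines.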
The classical Newton method and the Fixed Newton (FN) method have a wide area of application in science and engineering, where nonlinear problems are encountered and convergence to a solution is an issue; a large number of iterations may have to be carried out to arrive at a solution. The time to arrive at a solution and the accuracy of the results are important parameters to consider when attempting any nonlinear problem in any field of application. The paper, "A New Multi-Step Fixed Newton's Method for Solving Large-Scale Systems of Nonlinear Equations", by Mohammed Waziri Yusuf, Ibrahim Saidu and Aisha Haliru, designs and implements a simple new multi-step approach, the Multi-step Fixed Newton's (MFN) method, to solve large-scale systems of nonlinear equations. The proposed method is capable of significantly reducing the execution time (CPU time), as compared to the classical Newton's method and the FN method, while still preserving the quality and accuracy of the numerical results. The paper finds that the storage space required for the Jacobian matrix over any number of iterations is less in the MFN method than in the conventional Newton's method and the FN method. This helps find a faster solution to any nonlinear problem by saving on computer resources.
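The shared idea behind fixed-Newton variants is to freeze the Jacobian (and its factorization) and reuse it over several steps instead of recomputing it every iteration. The exact MFN update is specified in the paper; the following Python sketch, with a made-up toy system, only illustrates that reuse pattern.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def multi_step_fixed_newton(F, J, x0, m=3, tol=1e-8, max_outer=50):
        """Evaluate and factorize the Jacobian once per outer cycle, then
        reuse the LU factors for m cheap inner steps."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_outer):
            lu_piv = lu_factor(J(x))           # one Jacobian per cycle
            for _ in range(m):                 # m steps share the factors
                fx = F(x)
                if np.linalg.norm(fx) < tol:
                    return x
                x = x - lu_solve(lu_piv, fx)
        return x

    # Toy system: x^2 + y^2 = 4 and x*y = 1.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
    J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
    print(multi_step_fixed_newton(F, J, [2.0, 0.5]))  # ~[1.932, 0.518]

Trading a slightly slower per-step contraction for far fewer Jacobian evaluations and factorizations is where the CPU-time savings come from.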
Three-dimensional interpolation methods are already in use to generate 3-dimensional objects, as in computer graphics and related fields, and there are large-scale industrial applications for contoured surfaces and other objects in the automotive, aerospace and other industries. Polynomials are used to represent 3-dimensional surfaces. The paper, "A New 3-Dimensional Polynomial Interpolation Method: An Algorithmic Approach", by Amitava Chatterjee and Rupak Bhattacharyya, introduces a novel interpolation operator along with its properties. The proposed method requires fewer nodes than other methods and is also handy and compact. The authors develop the 3D polynomial interpolation method with full mathematical formulation, yet no deep knowledge of mathematical analysis is needed to follow the development. They also devise an algorithm for the polynomial interpolation, furnish a numerical example to illustrate its working, and mention some practical applications of their new method.
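The authors' operator and its node count are defined in the paper. As a baseline for comparison, here is the simplest 3D polynomial interpolant, standard trilinear interpolation, which builds a degree-1 tensor-product polynomial from the 8 corner nodes of a cell; the sample function is made up.

    import numpy as np

    def trilinear(c, x, y, z):
        """Trilinear interpolation on the unit cell [0, 1]^3: c[i, j, k]
        is the value at corner (i, j, k). Interpolates along x, then y,
        then z."""
        c00 = c[0, 0, 0] * (1 - x) + c[1, 0, 0] * x
        c10 = c[0, 1, 0] * (1 - x) + c[1, 1, 0] * x
        c01 = c[0, 0, 1] * (1 - x) + c[1, 0, 1] * x
        c11 = c[0, 1, 1] * (1 - x) + c[1, 1, 1] * x
        c0 = c00 * (1 - y) + c10 * y
        c1 = c01 * (1 - y) + c11 * y
        return c0 * (1 - z) + c1 * z

    # Sample f(x, y, z) = x + 2y + 3z at the corners; trilinear
    # interpolation reproduces any such linear function exactly.
    corners = np.fromfunction(lambda i, j, k: i + 2 * j + 3 * k, (2, 2, 2))
    print(trilinear(corners, 0.25, 0.5, 0.75))  # 3.5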
The Black-Merton-Scholes model is a mathematical description of financial markets and derivative investment instruments and is widely used in the pricing of European-style options. Black-Scholes pricing is popular in practice, as it is easy to calculate and explicitly models the relationship among all the variables. It is a useful approximation, particularly when analyzing the directionality of price movements when crossing critical points. Despite the existence of the volatility smile (and the violation of all the other assumptions of the Black-Scholes model), the Black-Scholes PDE and the Black-Scholes formula are still used extensively in practice. The binomial options pricing model approach is also widely used, as it is able to handle a variety of conditions for which other models cannot easily be applied; it is more accurate, particularly for longer-dated options on securities with dividend payments. In the paper, "Prices Expansion in the Wishart Model", the authors, Pierre Gauthier and Dylan Possamaï, consider the Wishart model and derive various results on the integrated Wishart process. Reducing the computational effort needed to compute call prices is one of the primary objectives of the paper.
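For background, here is a minimal Python implementation of the Black-Scholes formula for a European call, the constant-volatility benchmark that stochastic-volatility models such as the Wishart model generalize; the inputs below are illustrative.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call: spot S, strike K, time
        to maturity T (in years), risk-free rate r, volatility sigma."""
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))  # ~10.45

In a stochastic-volatility setting such as the Wishart model, sigma is no longer a single constant, and efficient pricing relies on techniques such as the expansions the paper derives.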
-- Sashikala Banoor
Consulting Editor