Journal Impact Factor (JIF) is a product of Thomson Reuters devised by Eugene Garfield, the founder of the Institute for Scientific Information, now a part of Thomson Reuters. It is a quantitative tool for evaluating journals: it measures how often an average article in a journal is cited within a given period. The impact factor is calculated over a three-year window, as the average number of times the journal's published papers are cited up to two years after publication. It serves as a proxy for the relative importance of a journal within its field, journals with a higher impact factor being judged more important than those with lower ones. Impact factors are calculated yearly for the journals indexed in Thomson Reuters' Journal Citation Reports. The JIF has also become an important factor in the assessment of the academic performance of teachers in universities and colleges in India (Mishra, 2009). It is obtained as the ratio of the total number of citations received by the papers published in the journal to the number of papers published in the journal over the same window. The impact factor can also be misused to evaluate the importance of an individual publication or an individual researcher (Seglen, 1997). This does not work well, since a small number of publications are cited much more often than the majority; the impact factor averages over all articles, and thus underestimates the citations of the most cited articles while overestimating those of the majority.

In the first paper of the current issue, “Temporal Changes in the Parameters of Statistical Distribution of Journal Impact Factor”, the author, S K Mishra, discusses the statistical distribution of the journal impact factor. In the literature, researchers have hypothesized various statistical distributions for the JIF, such as the negative exponential (Brookes, 1970), Poisson (Brown, 1980), lognormal (Matricciani, 1991), and Weibull (Hurt and Budd, 1992).
The distribution of the JIF is asymmetric and non-mesokurtic, as is that of log10(JIF), and is well characterized by Pearson's Type-IV distribution (Mishra, 2009). In this paper, the author estimates the parameters of the Johnson SU distribution fitted to log10(JIF) data over 10 years and studies the temporal variation in the estimated parameters. He shows that although the Burr-XII, Dagum and Johnson SU distributions fit log10(JIF) better than any other distribution, the parameters of the Johnson SU distribution exhibit stability over samples, unlike those of Burr-XII and Dagum.
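The kind of fitting involved can be sketched with SciPy's Johnson SU implementation on synthetic data (the shape parameters below are chosen arbitrarily as a stand-in; the paper fits real JCR data):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for log10(JIF) values: a Johnson SU sample with
# arbitrary parameters a, b (shape), loc, scale
log_jif = stats.johnsonsu.rvs(a=-1.0, b=1.5, loc=0.0, scale=0.5,
                              size=2000, random_state=0)

# Maximum-likelihood estimation of the four Johnson SU parameters
a_hat, b_hat, loc_hat, scale_hat = stats.johnsonsu.fit(log_jif)
print(f"a={a_hat:.2f}, b={b_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}")
```

Repeating such a fit year by year, as the paper does, lets one track how the four estimated parameters drift over time.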
Demand is considered one of the most important components of an inventory system, as the inventory problem does not exist without it. Models for the inventory problem are based on the nature of the demand, which may be static or dynamic over the lifetime of the product. Static demand occurs rarely, as demand for a product generally varies with factors such as time, price and stock level. Similarly, the demand pattern for perishable products varies over their life cycle in the market. Perishable items, such as foodstuffs, human blood, fresh produce and meat, have a maximum usable lifetime and become unusable after a certain time.
The lifetime of such products is assumed to be a random variable whose probability distribution may be gamma, Weibull, exponential or of some other form. Maintaining the inventory of perishable items is a major concern in the supply chain of any business organization. In the next paper, “An EOQ Model with Pareto Distribution for Deterioration, Trapezoidal Type Demand and Backlogging Under Trade Credit Policy”, the authors, Narayan Singh, Bindu Vaish and S R Singh, propose an Economic Order Quantity (EOQ) inventory model in which the lifetime of the commodity is random and the demand rate is trapezoidal, i.e., a piecewise linear function of time. This demand rate applies both when stock is available and during the shortage period. Shortages are assumed to be completely backlogged, with a trapezoidal backlogging rate. The authors derive an optimal inventory replenishment policy for this model under the condition of permissible delay in payments. The restrictive assumption of permissible delay is relaxed: at the end of the credit period, the retailer makes a partial payment of the total purchasing cost and pays the balance by taking a loan from the bank.
They also show how the optimal order quantity and the optimal replenishment time can be obtained by minimizing the total cost.
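The cost-minimization idea behind such models can be illustrated with the classic EOQ formula for constant demand, a much simpler setting than the paper's trapezoidal-demand model (the figures below are hypothetical):

```python
import math

def eoq(annual_demand, ordering_cost, holding_cost):
    """Classic EOQ: the order quantity minimizing the sum of ordering
    and holding costs per unit time, Q* = sqrt(2*D*K / h)."""
    return math.sqrt(2 * annual_demand * ordering_cost / holding_cost)

# Hypothetical figures: D = 1200 units/year, K = 50 per order,
# h = 2 per unit per year
q_star = eoq(1200, 50, 2)
print(round(q_star, 2))  # 244.95
```

The paper's model replaces the constant demand D with a trapezoidal rate and adds deterioration, backlogging and trade credit, so the optimum is found numerically rather than in closed form.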
Linear regression assumes that the relation of the dependent variable to the independent variables is a linear function of some parameters. Though least squares regression is widely used and optimal under the classical assumptions, it has certain problems and pitfalls. It performs badly when some points in the data have excessively large or small values compared to the rest of the data. It also suffers from the drawback that the true relation is not linear in many cases: in the real world, relationships tend to be more complicated than straight lines, and linear methods then fail to predict well. A good remedy for nonlinearity is to apply the kernel method, i.e., kernelized ridge regression with an appropriate choice of nonlinear kernel function. Ridge regression combined with a kernel function generally avoids overfitting and underfitting, since it is not restricted to a simple linear model.

In the final paper, “A Survey of Ridge Regression for Improvement Over Ordinary Least Squares”, the author, Rajeshwar Singh, discusses the advantages of ridge regression over ordinary least squares under multicollinearity, which occurs when the explanatory variables are highly correlated. In the presence of multicollinearity, X′X becomes nearly singular, as the design matrix X is close to being rank deficient. The author presents ridge regression as a solution to this problem. Hoerl (1962) suggested adding a small positive increment k to the diagonal elements of X′X before inverting it; this increment is called the biasing parameter, and the resulting estimate of β is called the ridge estimate. In this paper, the author discusses some of the properties of ridge regression.
He also discusses the relation of the ridge estimator to other estimators given by Farebrother (1975) and Goldstein and Smith (1974), and to Bayesian estimation, and finds that ridge regression is an effective solution to multicollinearity, reducing the MSE and giving more reliable estimates of β.
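The ridge estimate itself can be sketched in a few lines (a minimal illustration on synthetic, nearly collinear data; the choice k = 1.0 for the biasing parameter is arbitrary):

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Ridge estimate: beta_hat = (X'X + k*I)^{-1} X'y.
    Setting k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + 0.01 * rng.normal(size=100)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=100)   # true coefficients (1, 1)

beta_ols = ridge_estimate(X, y, k=0.0)     # unstable: X'X nearly singular
beta_ridge = ridge_estimate(X, y, k=1.0)   # shrunken but stable
print(np.round(beta_ridge, 2))
```

With near-collinear columns, the individual OLS coefficients can swing to large values of opposite sign, while the ridge coefficients stay close together; the price is a small, deliberate bias, which is the trade-off the survey examines.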
-- Sashikala Banoor
Consulting Editor