4 results for conditional least squares
at University of Queensland eSpace - Australia
Abstract:
Objective: This study examined a sample of patients in Victoria, Australia, to identify factors in selection for conditional release from an initial hospitalization that occurred within 30 days of entry into the mental health system. Methods: Data were from the Victorian Psychiatric Case Register. All patients first hospitalized and conditionally released between 1990 and 2000 were identified (N = 8,879), and three comparison groups were created. Two groups were hospitalized within 30 days of entering the system: those who were given conditional release and those who were not. A third group was conditionally released from a hospitalization that occurred after or extended beyond 30 days after system entry. Logistic regression identified characteristics that distinguished the first group. Ordinary least-squares regression was used to evaluate the contribution of conditional release early in treatment to reducing inpatient episodes, inpatient days, days per episode, and inpatient days per 30 days in the system. Results: Conditional release early in treatment was used for 11 percent of the sample, or more than a third of those who were eligible for this intervention. Factors significantly associated with selection for early conditional release were those related to a better prognosis (initial hospitalization at a later age and having greater than an 11th grade education), a lower likelihood of a diagnosis of dementia or schizophrenia, involuntary status at first inpatient admission, and greater community involvement (being employed and being married). When the analyses controlled for these factors, use of conditional release early in treatment was significantly associated with a reduction in use of subsequent inpatient care.
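The second-stage analysis described above (ordinary least-squares regression of inpatient utilization on an early-conditional-release indicator, controlling for selection factors) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual analysis: the variable names (`early_cr`, `age`, `married`, `inpatient_days`) and the data-generating process are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the register data: a binary indicator for early
# conditional release plus two hypothetical control covariates.
n = 1000
early_cr = rng.integers(0, 2, size=n).astype(float)  # 1 = released early
age = rng.normal(40.0, 12.0, size=n)
married = rng.integers(0, 2, size=n).astype(float)

# Assumed data-generating process in which early conditional release
# reduces subsequent inpatient days (coefficient -8.0 is illustrative).
inpatient_days = (60.0 - 8.0 * early_cr - 0.3 * age + 2.0 * married
                  + rng.normal(0.0, 5.0, size=n))

# Ordinary least squares with an intercept and controls, as in the
# study's second stage; beta[1] is the adjusted effect of early release.
X = np.column_stack([np.ones(n), early_cr, age, married])
beta, *_ = np.linalg.lstsq(X, inpatient_days, rcond=None)
```

Because the controls enter the design matrix alongside the treatment indicator, `beta[1]` estimates the association of early conditional release with inpatient days net of those factors.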
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture of experts (ME) networks in applications to multiclass classification. We identify some situations where the application of the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it is reported in the literature that networks trained by the EM algorithm, using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. Also, we propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to the IRLS algorithm on some simulated and real data sets.
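The stabilisation the abstract describes, damping each IRLS (Newton) step by a learning rate smaller than one, can be sketched for the simplest case of binary logistic regression. This is an illustrative reconstruction, not the paper's ME implementation: the synthetic data, the function names, and the choice of learning rate 0.5 are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data as a stand-in for the paper's benchmarks.
n, d = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d))])
w_true = np.array([0.5, 1.0, -2.0, 0.75])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def log_likelihood(w):
    z = X @ w
    # Numerically stable Bernoulli log-likelihood: y*z - log(1 + e^z).
    return np.sum(y * z - np.logaddexp(0.0, z))

def irls(lr, n_iter=25):
    """IRLS updates for logistic regression with a damped Newton step.

    A learning rate lr < 1 shrinks each step, which is the stabilisation
    reported for the inner loop of the M-step.
    """
    w = np.zeros(X.shape[1])
    loglik = [log_likelihood(w)]
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        R = p * (1.0 - p)                  # working weights
        H = X.T @ (X * R[:, None])         # negative Hessian
        step = np.linalg.solve(H, X.T @ (y - p))
        w = w + lr * step                  # lr < 1 damps the Newton step
        loglik.append(log_likelihood(w))
    return w, np.array(loglik)

w_hat, loglik = irls(lr=0.5)
```

On data like this the damped updates keep the log likelihood increasing from one iteration to the next, which is the behaviour the abstract reports for a learning rate smaller than one.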