997 results for Reverse logistic network


Relevance:

30.00%

Publisher:

Abstract:

The mortality rate of older patients with intertrochanteric fractures has been increasing with the aging of populations in China. The purpose of this study was: 1) to develop an artificial neural network (ANN) that uses clinical information to predict the 1-year mortality of elderly patients with intertrochanteric fractures, and 2) to compare the ANN's predictive ability with that of logistic regression models. The ANN model was tested against actual outcomes in an intertrochanteric femoral fracture database in China. The ANN model was generated with eight clinical inputs and a single output, and its performance was compared with a logistic regression model built from the same inputs in terms of accuracy, sensitivity, specificity, and discriminability. The study population comprised 2150 patients (679 males and 1471 females): 1432 in the training group and 718 new patients in the testing group. Among the four ANN models, the one with eight neurons in the hidden layer had the highest accuracies: 92.46% and 85.79% in the training and testing datasets, respectively. The areas under the receiver operating characteristic curves of the automatically selected ANN model for the two datasets were 0.901 (95% CI = 0.814-0.988) and 0.869 (95% CI = 0.748-0.990), higher than the 0.745 (95% CI = 0.612-0.879) and 0.728 (95% CI = 0.595-0.862) of the logistic regression model. The ANN model can be used to predict 1-year mortality in elderly patients with intertrochanteric fractures; it outperformed logistic regression on multiple performance measures when given the same variables.
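The models above are ranked by the area under the receiver operating characteristic curve. As a minimal, self-contained sketch (toy scores, not the study's data), the AUC can be computed directly from predicted risks via the Mann-Whitney formulation:

```python
def roc_auc(scores, labels):
    """AUC = probability that a randomly chosen positive case gets a
    higher score than a randomly chosen negative case (ties count 0.5).
    This is the Mann-Whitney U formulation of the ROC area."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: a perfectly ranked model vs. one with a single inversion
labels = [1, 0, 1, 0]
model_a = [0.9, 0.4, 0.6, 0.2]   # ranks all positives above all negatives
model_b = [0.9, 0.6, 0.4, 0.2]   # swaps one positive/negative pair
```

With these toy inputs, `roc_auc` gives 1.0 for the first ranking and 0.75 for the second, mirroring how the study's AUC comparison favors the better-discriminating model.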

Relevance:

30.00%

Publisher:

Abstract:

We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum-cost network that respects certain requirements, usually described in terms of the network topology or of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facility, and connecting different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach that heuristically obtains tentative solutions for the number and location of vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient in terms of both solution quality and computational time. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To evaluate perinatal factors associated with early neonatal death in preterm infants with birth weights (BW) of 400-1,500 g. Methods: A multicenter prospective cohort study of all infants with BW of 400-1,500 g and 23-33 weeks of gestational age (GA), without malformations, who were born alive at eight public university tertiary hospitals in Brazil between June of 2004 and May of 2005. Infants who died within their first 6 days of life were compared with those who did not regarding maternal and neonatal characteristics and morbidity during the first 72 hours of life. Variables associated with early death were identified by stepwise logistic regression. Results: A total of 579 live births met the inclusion criteria. Early deaths occurred in 92 (16%) cases, varying between centers from 5% to 31%, and these differences persisted after controlling for newborn illness severity and mortality risk score (SNAPPE-II). According to the multivariate analysis, the following factors were associated with early in-hospital neonatal death: gestational age of 23-27 weeks (odds ratio [OR] = 5.0; 95% CI 2.7-9.4), absence of maternal hypertension (OR = 1.9; 95% CI 1.0-3.7), 5th-minute Apgar score of 0-6 (OR = 2.8; 95% CI 1.4-5.4), presence of respiratory distress syndrome (OR = 3.1; 95% CI 1.4-6.6), and network center of birth. Conclusion: Important perinatal factors associated with early neonatal death in very low birth weight preterm infants can be modified by interventions such as improving fetal vitality at birth and reducing the incidence and severity of respiratory distress syndrome. The heterogeneity of early neonatal death rates across the centers studied indicates that best clinical practices should be identified and disseminated throughout the country.
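The odds ratios above come from stepwise logistic regression; as a simpler, hedged illustration (hypothetical counts, not the study's data), a crude odds ratio with its Wald 95% confidence interval can be computed from a 2x2 table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# purely hypothetical counts: deaths among lower- vs. higher-GA infants
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

Adjusted odds ratios, as in the study, additionally require fitting the full multivariable model; this crude version only shows where a single OR and its interval come from.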

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To integrate data from two-dimensional echocardiography (2D ECHO), three-dimensional echocardiography (3D ECHO), and tissue Doppler imaging (TDI) for prediction of left ventricular (LV) reverse remodeling (LVRR) after cardiac resynchronization therapy (CRT), and to compare the evaluation of cardiac dyssynchrony by TDI and 3D ECHO. Methods: Twenty-four consecutive patients with heart failure, sinus rhythm, QRS duration ≥120 msec, functional class III or IV, and LV ejection fraction (LVEF) ≤0.35 underwent CRT. 2D ECHO, 3D ECHO with systolic dyssynchrony index (SDI) analysis, and TDI were performed before and 3 and 6 months after CRT. Cardiac dyssynchrony analyses by TDI and SDI were compared with Pearson's correlation test. Before CRT, a univariate analysis of baseline characteristics was performed for the construction of a logistic regression model to identify the best predictors of LVRR. Results: After 3 months of CRT, there was a moderate correlation between TDI and SDI (r = 0.52); at the other time points, there was no strong correlation. Nine of twenty-four (38%) patients presented with LVRR 6 months after CRT. After logistic regression analysis, SDI (SDI > 11%) was the only independent predictor of LVRR 6 months after CRT (sensitivity = 0.89, specificity = 0.73). After construction of receiver operating characteristic (ROC) curves, an equation was established to predict LVRR: LVRR = -0.4 × LVDD (mm) + 0.5 × LVEF (%) + 1.1 × SDI (%), with responders presenting values >0 (sensitivity = 0.67, specificity = 0.87). Conclusions: In this study, there was no strong correlation between TDI and SDI. An equation is proposed for the prediction of LVRR after CRT. Although larger trials are needed to validate these findings, this equation may be useful for candidates for CRT. (Echocardiography 2012;29:678-687)
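The abstract's discriminant equation can be applied directly to a candidate's measurements. A small sketch follows; the patient values used here are illustrative, not taken from the study:

```python
def lvrr_score(lvdd_mm, lvef_pct, sdi_pct):
    """Score from the abstract's reported equation:
    LVRR = -0.4*LVDD(mm) + 0.5*LVEF(%) + 1.1*SDI(%).
    Responders present values > 0."""
    return -0.4 * lvdd_mm + 0.5 * lvef_pct + 1.1 * sdi_pct

def predicted_responder(lvdd_mm, lvef_pct, sdi_pct):
    """True if the score falls in the responder range (> 0)."""
    return lvrr_score(lvdd_mm, lvef_pct, sdi_pct) > 0

# illustrative patients: same LVDD and LVEF, differing only in SDI
high_sdi = predicted_responder(70, 25, 15)   # responder (score just above 0)
low_sdi = predicted_responder(70, 25, 5)     # non-responder
```

Since the three coefficients were fitted on 24 patients, the sketch mainly shows how such a score would be used at the bedside, not that it generalizes.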

Relevance:

30.00%

Publisher:

Abstract:

Background: Smear-negative pulmonary tuberculosis (SNPT) accounts for 30% of pulmonary tuberculosis cases reported yearly in Brazil. This study aimed to develop a prediction model for SNPT for use with outpatients in areas with scarce resources. Methods: The study enrolled 551 patients with clinical-radiological suspicion of SNPT in Rio de Janeiro, Brazil. The original data were divided into two equivalent samples for generation and validation of the prediction models. Symptoms, physical signs, and chest X-rays were used to construct logistic regression and classification and regression tree models. From the logistic regression, we generated a clinical and radiological prediction score. The area under the receiver operating characteristic curve, sensitivity, and specificity were used to evaluate the models' performance in both the generation and validation samples. Results: It was possible to generate predictive models for SNPT with sensitivity ranging from 64% to 71% and specificity ranging from 58% to 76%. Conclusion: The results suggest that these models might be useful as screening tools for estimating the risk of SNPT, optimizing the utilization of more expensive tests, and avoiding the costs of unnecessary anti-tuberculosis treatment. Such models might be cost-effective tools in a health care network with hierarchical distribution of scarce resources.
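A clinical prediction score of the kind derived here is typically a weighted sum of findings with a cutoff, then evaluated by sensitivity and specificity. The sketch below uses entirely hypothetical point weights and findings (the study's actual score came from its fitted logistic regression coefficients):

```python
# Hypothetical point weights for illustration only -- the study derived its
# actual score from logistic regression coefficients on symptoms, signs,
# and chest X-ray findings.
WEIGHTS = {"cough>3w": 2, "weight_loss": 1, "upper_lobe_xray": 3, "night_sweats": 1}

def score(findings):
    """Sum the points of the findings present in a patient."""
    return sum(WEIGHTS[f] for f in findings)

def classify(findings, threshold=3):
    """Flag the patient as high risk of SNPT if the score reaches the cutoff."""
    return score(findings) >= threshold

def sens_spec(predictions, labels):
    """Sensitivity and specificity of binary predictions vs. true labels."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    tn = sum(1 for p, y in zip(predictions, labels) if not p and not y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold and recomputing sensitivity and specificity at each value is exactly what tracing the ROC curve amounts to.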

Relevance:

30.00%

Publisher:

Abstract:

For centuries the science of pharmacognosy dominated rational drug development, until it was gradually supplanted by target-based drug discovery over the last fifty years. Pharmacognosy stems from the different systems of traditional herbal medicine, and its "reverse pharmacology" approach has led to the discovery of numerous pharmacologically active molecules and drug leads for humankind. But do botanical drugs also provide effective mixtures? Nature has evolved distinct strategies to modulate biological processes, either by selectively targeting biological macromolecules or by creating molecular promiscuity, or polypharmacology (one molecule binds to different targets). Widely claimed to be superior to monosubstances, mixtures of bioactive compounds in botanical drugs allegedly exert synergistic therapeutic effects. Despite evolutionary clues to molecular synergism in nature, sound experimental data are still largely lacking to support this assumption. In this short review, the emerging concept of network pharmacology is highlighted, and the importance of studying ligand-target networks for botanical drugs is emphasized. Furthermore, problems associated with studying mixtures of molecules with distinctly different pharmacodynamic properties are addressed. It is concluded that a better understanding of the polypharmacology and potential network pharmacology of botanical drugs is fundamental to the ongoing rationalization of phytotherapy.

Relevance:

30.00%

Publisher:

Abstract:

Background: The accumulation of mutations after long-lasting exposure to a failing combination antiretroviral therapy (cART) is problematic and severely reduces the options for further successful treatments. Methods: We studied patients from the Swiss HIV Cohort Study who failed cART with nucleoside reverse transcriptase inhibitors (NRTIs) and either a ritonavir-boosted PI (PI/r) or a non-nucleoside reverse transcriptase inhibitor (NNRTI). The loss of genotypic activity <3, 3–6, and >6 months after virological failure was analyzed with the Stanford algorithm. Risk factors associated with the early emergence of drug resistance mutations (<6 months after failure) were identified with multivariable logistic regression. Results: Ninety-nine genotypic resistance tests from PI/r-treated and 129 from NNRTI-treated patients were analyzed. The risk of losing the activity of ≥1 NRTIs was lower among PI/r- compared to NNRTI-treated individuals <3, 3–6, and >6 months after failure: 8.8% vs. 38.2% (p = 0.009), 7.1% vs. 46.9% (p<0.001), and 18.9% vs. 60.9% (p<0.001). The percentages of patients who had lost PI/r activity were 2.9%, 3.6%, and 5.4% at <3, 3–6, and >6 months after failure, compared with 41.2%, 49.0%, and 63.0% of those who had lost NNRTI activity (all p<0.001). The risk of accumulating an early NRTI mutation was strongly associated with NNRTI-containing cART (adjusted odds ratio: 13.3; 95% CI: 4.1–42.8; p<0.001). Conclusions: The loss of activity of PIs and NRTIs was low among patients treated with PI/r, even after long-lasting exposure to a failing cART; thus, more options remain for second-line therapy. This finding is potentially of high relevance, particularly for settings with poor or no virological monitoring.

Relevance:

30.00%

Publisher:

Abstract:

Using the relational dyad as the unit of analysis, this study examines the effects of perceived influence and friendship ties on the formation and maintenance of cooperative relationships between corporate top executives. Specifically, it is argued that perceived influence as well as friendship ties between any two managers will enhance the likelihood that these managers collaborate with each other. Additionally, a negative interaction effect of perceived influence and friendship on cooperation is proposed. The empirical analyses draw on network data collected among all members of the top two organizational levels of the strategy-making process in two multinational firms headquartered in Germany. Logistic regression based on the quadratic assignment procedure (QAP) supports our hypotheses on the direct effects of perceived influence and friendship ties on cooperative relationships in both companies. In addition, we find at least partial support for our assumption that perceived influence and friendship interact negatively with respect to their effect on cooperation. Seemingly, perceived influence is partially substituted by managerial friendship ties.
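QAP handles the non-independence of dyadic observations by permuting actors, not dyads. A minimal hedged sketch of the idea follows, using a simple Pearson correlation between two toy adjacency matrices rather than the study's full logistic model:

```python
import random

def pearson(x, y):
    """Plain Pearson correlation of two equal-length numeric lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def qap_test(A, B, n_perm=500, seed=7):
    """Correlate the off-diagonal cells of two dyadic matrices, then build a
    null distribution by jointly permuting the rows and columns of A (QAP).
    Returns the observed correlation and a one-sided permutation p-value."""
    n = len(A)
    def cells(M, perm):
        return [M[perm[i]][perm[j]] for i in range(n) for j in range(n) if i != j]
    ident = list(range(n))
    obs = pearson(cells(A, ident), cells(B, ident))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        perm = ident[:]
        rng.shuffle(perm)                      # relabel the actors of A
        if pearson(cells(A, perm), cells(B, ident)) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)      # add-one permutation p-value
```

Permuting whole rows and columns together preserves each actor's degree pattern, which is why QAP-based inference is preferred over ordinary standard errors for network dyads.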

Relevance:

30.00%

Publisher:

Abstract:

QUESTIONS UNDER STUDY: To describe patient characteristics and risk factors for death of Swiss trauma patients in the Trauma Audit and Research Network (TARN). METHODS: Descriptive analysis of trauma patients (≥16 years) admitted to a level I trauma centre in Switzerland (September 1, 2009 to August 31, 2010) and entered into TARN. Multivariable logistic regression analysis was used to identify predictors of 30-day mortality. RESULTS: Of 458 patients, 71% were male. The median age was 50.5 years (inter-quartile range [IQR] 32.2-67.7), the median Injury Severity Score (ISS) was 14 (IQR 9-20), and the median Glasgow Coma Score (GCS) was 15 (IQR 14-15). The ISS was >15 for 47% of patients, and 14% had an ISS >25. A total of 17 patients (3.7%) died within 30 days of trauma; all deaths occurred in patients with ISS >15. Most injuries were due to falls <2 m (35%) or road traffic accidents (29%). Injuries to the head (39%) were followed by injuries to the lower limbs (33%), spine (28%) and chest (27%). The time of admission peaked between 12:00 and 22:00, with a second peak between 00:00 and 02:00. A total of 64% of patients were admitted directly to our trauma centre. The median time to CT was 30 min (IQR 18-54 min). In the multivariable regression analysis, the predictors of mortality were older age, higher ISS and lower GCS. CONCLUSIONS: Characteristics of Swiss trauma patients derived from TARN were described for the first time, providing a detailed overview of the institutional trauma population. Based on these results, patient management and hospital resources (e.g. triage of patients, time to CT, staffing during night shifts) could be evaluated as a further step.

Relevance:

30.00%

Publisher:

Abstract:

Intravital imaging has revealed that T cells change their migratory behavior during physiological activation inside lymphoid tissue. Yet, it remains less well investigated how the intrinsic migratory capacity of activated T cells is regulated by chemokine receptor levels or other regulatory elements. Here, we used an adjuvant-driven inflammation model to examine how motility patterns corresponded with CCR7, CXCR4, and CXCR5 expression levels on ovalbumin-specific DO11.10 CD4(+) T cells in draining lymph nodes. We found that while CCR7 and CXCR4 surface levels remained essentially unaltered during the first 48-72 h after activation of CD4(+) T cells, their in vitro chemokinetic and directed migratory capacity to the respective ligands, CCL19, CCL21, and CXCL12, was substantially reduced during this time window. Activated T cells recovered from this temporary decrease in motility on day 6 post immunization, coinciding with increased migration to the CXCR5 ligand CXCL13. The transiently impaired CD4(+) T cell motility pattern correlated with increased LFA-1 expression and augmented phosphorylation of the microtubule regulator Stathmin on day 3 post immunization, yet neither microtubule destabilization nor integrin blocking could reverse TCR-imprinted unresponsiveness. Furthermore, protein kinase C (PKC) inhibition did not restore chemotactic activity, ruling out PKC-mediated receptor desensitization as mechanism for reduced migration in activated T cells. Thus, we identify a cell-intrinsic, chemokine receptor level-uncoupled decrease in motility in CD4(+) T cells shortly after activation, coinciding with clonal expansion. The transiently reduced ability to react to chemokinetic and chemotactic stimuli may contribute to the sequestering of activated CD4(+) T cells in reactive peripheral lymph nodes, allowing for integration of costimulatory signals required for full activation.

Relevance:

30.00%

Publisher:

Abstract:

Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson’s Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson’s patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson’s disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually and requires adequate machine learning and data mining techniques. Classification is one of the most important such techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data; second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing in the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and as stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically used Bayesian network classifiers as models.
The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers (MBCs) from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables.
The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect: the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and in need of updating. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general in that it can be applied to several classification models. Using two different models, namely, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while maintaining good classification performance.
Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
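The Page-Hinkley test used for drift monitoring is simple enough to sketch. Below is an illustrative detector for a sustained drop in a monitored score such as the average log-likelihood; the default `delta` and `lam` values are placeholders, not parameters from the thesis:

```python
class PageHinkley:
    """Page-Hinkley test for a downward shift in a monitored score.
    delta tolerates small fluctuations around the running mean;
    lam is the alarm threshold (both illustrative defaults)."""
    def __init__(self, delta=0.005, lam=1.0):
        self.delta, self.lam = delta, lam
        self.n, self.mean, self.m, self.m_max = 0, 0.0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n   # running mean of the score
        self.m += x - self.mean + self.delta    # cumulative deviation
        self.m_max = max(self.m_max, self.m)    # historical maximum
        return self.m_max - self.m > self.lam   # True signals drift
```

On detecting drift, a system like LA-MB-MBC would adapt the affected parts of the model, while GA-MB-MBC would relearn from scratch; the detector itself is agnostic to that choice.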

Relevance:

30.00%

Publisher:

Abstract:

Purpose – The purpose of this paper is to present a simulation-based evaluation method for comparing different organizational forms and software support levels in the field of supply chain management (SCM). Design/methodology/approach – Apart from widely known logistic performance indicators, the discrete-event simulation model explicitly considers coordination cost stemming from iterative administration procedures. Findings – The method is applied to an exemplary supply chain configuration under various parameter settings. Curiously, additional coordination cost does not always result in improved logistic performance, and variations in influence factors lead to different organizational recommendations. The results confirm the high importance of these hitherto disregarded dimensions when evaluating SCM concepts and IT tools. Research limitations/implications – The model is based on simplified product and network structures; future research shall include more complex, real-world configurations. Practical implications – The developed method is designed for identifying improvement potential when SCM software is employed. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant IT investment can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest, and the method provides a comprehensive tool for strategic IT decision making. Originality/value – The reviewed literature is mostly focused on the benefits of SCM software implementations. However, ERP-system-based supply chain coordination is still widespread industrial practice, and the associated coordination cost has not been addressed by researchers.

Relevance:

30.00%

Publisher:

Abstract:

In this work we study Forward Osmosis (FO) as an emerging desalination technology and its capability to replace, totally or partially, Reverse Osmosis (RO) in order to reduce the large amount of energy required by current desalination plants. For this purpose, we propose a superstructure that includes both membrane-based desalination technologies, allowing the selection of either one technology alone or a combination of both while seeking the optimal configuration of the network. The optimization problem is solved for a seawater desalination plant with a given fresh water production. The results show that the optimal solution combines both desalination technologies, reducing not only the energy consumption but also the total cost of the desalination process compared with the same plant operating only with RO.
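The superstructure idea (choosing RO only, FO only, or a hybrid split) can be caricatured as a one-variable cost minimization over the RO share of production. All coefficients below are hypothetical placeholders, not values from the paper, which solves a much richer optimization model:

```python
def unit_cost(x_ro):
    """Cost per m3 of product water for a hybrid plant producing a fraction
    x_ro via RO and (1 - x_ro) via FO. All coefficients are assumed,
    illustrative values only."""
    energy_ro = 3.0 + 1.5 * x_ro               # kWh/m3; assumed to rise with RO load
    energy_fo = 2.5                            # kWh/m3; assumed constant
    energy = x_ro * energy_ro + (1 - x_ro) * energy_fo
    capex = 0.30 * x_ro + 0.45 * (1 - x_ro)    # amortized $/m3, assumed
    return 0.08 * energy + capex               # $0.08/kWh electricity, assumed

# brute-force search over the RO share, mimicking the superstructure choice
best_cost, best_x = min((unit_cost(x / 100), x / 100) for x in range(101))
```

With these made-up coefficients the minimum lands at an interior split rather than at either pure technology, which is the qualitative behavior the abstract reports; with linear costs the optimum would always sit at a corner (one technology only).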

Relevance:

30.00%

Publisher:

Abstract:

By integrating the research and resources of hundreds of scientists from dozens of institutions, network-level science is fast becoming a scientific model of choice for addressing complex problems. In the effort to confront pressing environmental issues such as climate change, many scientists, practitioners, policy makers, and institutions are promoting network-level research that integrates the social and ecological sciences. To understand how this scientific trend is unfolding among rising scientists, we examined how graduate students experienced one such emergent social-ecological research initiative, Integrated Science for Society and Environment, within the large-scale, geographically distributed Long Term Ecological Research (LTER) Network. Through workshops, surveys, and interviews, we found that graduate students faced challenges in how they conceptualized and practiced social-ecological research within the LTER Network. We have presented these conceptual challenges at three scales: the individual/project, the LTER site, and the LTER Network. The level of student engagement with and knowledge of the LTER Network varied, and students faced different institutional, cultural, and logistical barriers to practicing social-ecological research. These types of challenges are unlikely to be unique to LTER graduate students; thus, our findings are relevant to other scientific networks implementing new social-ecological research initiatives.