790 results for Object-based Classification
Abstract:
Doctoral Thesis in Communication Sciences - Specialization in Strategic and Organizational Communication
Abstract:
Doctoral Programme in Biomedical Engineering
Abstract:
Supercritical fluid technology has been the target of many pharmaceutical investigations into particle production for almost 35 years, owing to the great advantages it offers over other technologies currently used for the same purpose. A brief history is presented, as well as the classification of supercritical technology based on the role that the supercritical fluid (carbon dioxide) performs in the process.
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Informatics)
Abstract:
"A workshop within the 19th International Conference on Applications and Theory of Petri Nets - ICATPN’1998"
Abstract:
The main purpose of the poster is to present how the Unified Modeling Language (UML) can be used for diagnosing and optimizing real industrial production systems. Using a car radio production line as a case study, the poster shows the modeling process that can be followed during the analysis phase of complex control applications. In order to guarantee continuity in the mapping of the models, the authors propose guidelines for transforming the use case diagrams into a single object diagram, which is the main diagram for the next phases of the development.
Abstract:
This paper discusses how object-oriented inheritance can be re-interpreted if statecharts are used for modelling the dynamic behaviour of an object. Support for inheritance of statecharts improves systems' development by easing the reuse of parts of already developed, successful systems, and by promoting the iterative and continuous refinement of models advocated by the operational approach. The statechart is the formalism used within UML to specify reactive state-based behaviours. This paper covers the use of statecharts within the modelling of embedded systems for industrial control applications, where performance and memory usage are main concerns.
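The following is a minimal, hypothetical sketch (not the authors' implementation) of the idea described above: a statechart modelled as a Python class whose state/transition table is inherited and refined by a subclass, so that already developed behaviour is reused and extended. The class and event names (Conveyor, "start", "fault", ...) are illustrative assumptions.

# Base statechart: states and event-driven transitions kept in a dictionary.
class Conveyor:
    def __init__(self):
        self.state = "idle"
        self.transitions = {
            ("idle", "start"): "running",
            ("running", "stop"): "idle",
        }

    def fire(self, event):
        """Take a transition if one is defined for (current state, event)."""
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state


# Refined statechart: inherits the base behaviour and adds a fault state,
# mirroring statechart inheritance as iterative model refinement.
class MonitoredConveyor(Conveyor):
    def __init__(self):
        super().__init__()
        self.transitions.update({
            ("running", "fault"): "error",
            ("error", "reset"): "idle",
        })


if __name__ == "__main__":
    c = MonitoredConveyor()
    print(c.fire("start"))   # running
    print(c.fire("fault"))   # error
    print(c.fire("reset"))   # idle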
Abstract:
This research studies the phenomenon of national and corporate culture. National culture is the culture the members of a country share, and corporate culture is a subculture which members of an organisation share (Schein, 1992). The objective of this research is to reveal whether the employees within equivalent Irish and American companies share the same corporate and national culture and to ascertain whether, within each company, there is a link between national culture and corporate culture. The objective of this study is achieved by replicating research conducted by Shing (1997) in Taiwan. Hypotheses and analytical tools developed by Shing are employed in the current study to allow comparison of results between Shing's study and the current study. The methodology used called for the measurement and comparison of national and corporate culture in two equivalent companies within the same industry. The two companies involved in this study are both located in Ireland and are of American and Irish origin. A sample of three hundred was selected and the response rate was 54%. The findings from this research are: (1) the two companies involved had different corporate cultures; (2) they had the same national culture; (3) there was no link between national culture and corporate culture within either company; (4) the findings were not similar to those of Shing (1997). The implication of these findings is that national and corporate culture are separate phenomena; therefore, corporate culture is not a response to national culture. The results of this research are not reflected in the findings of Shing (1997) and are therefore context specific. The core recommendation for management is that corporate culture should take account of national culture. This is because, although employees recognise the espoused values of corporate culture (Schein, 1992), they are at the same time influenced by a much stronger force, their national culture.
Validation of the Killip-Kimball Classification and Late Mortality after Acute Myocardial Infarction
Abstract:
Background: The classification or index of heart failure severity in patients with acute myocardial infarction (AMI) was proposed by Killip and Kimball to assess the risk of in-hospital death and the potential benefit of the specific care provided in Coronary Care Units (CCU) during the 1960s. Objective: To validate the risk stratification of the Killip classification for long-term mortality and to compare its prognostic value in patients with non-ST-segment elevation MI (NSTEMI) relative to patients with ST-segment elevation MI (STEMI), in the era of reperfusion and modern antithrombotic therapies. Methods: We evaluated 1906 patients with documented AMI admitted to the CCU from 1995 to 2011, with a mean follow-up of 5 years to assess total mortality. Kaplan-Meier (KM) curves were developed to compare survival distributions according to Killip class and NSTEMI versus STEMI. Cox proportional regression models were developed to determine the independent association between Killip class and mortality, with sensitivity analyses based on type of AMI. Results: The proportions of deaths and the KM survival distributions were significantly different across Killip classes >1 (p < 0.001), with a similar pattern between patients with NSTEMI and STEMI. Cox models identified the Killip classification as a significant, sustained and consistent predictor, independent of relevant covariables (Wald χ2 16.5 [p = 0.001], NSTEMI; Wald χ2 11.9 [p = 0.008], STEMI). Conclusion: The Killip and Kimball classification plays a relevant prognostic role in mortality at a mean follow-up of 5 years post-AMI, with a similar pattern between NSTEMI and STEMI patients.
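As a minimal sketch of the kind of survival analysis described in this abstract (Kaplan-Meier curves stratified by Killip class plus a Cox proportional hazards model), the Python snippet below uses the lifelines library. The file name and column names (ami_cohort.csv, time_years, death, killip, age) are hypothetical assumptions; this is not the study's code.

import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical cohort: one row per patient with follow-up time in years,
# a death indicator (1 = died), Killip class and covariables.
df = pd.read_csv("ami_cohort.csv")

# Kaplan-Meier survival curve for each Killip class
kmf = KaplanMeierFitter()
for killip_class, group in df.groupby("killip"):
    kmf.fit(group["time_years"], event_observed=group["death"],
            label=f"Killip {killip_class}")
    kmf.plot_survival_function()
plt.show()

# Cox proportional hazards model: Killip class adjusted for covariables
cph = CoxPHFitter()
cph.fit(df[["time_years", "death", "killip", "age"]],
        duration_col="time_years", event_col="death")
cph.print_summary()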
Abstract:
n.s. no.29(1994)
Abstract:
Gestures are the first forms of conventional communication that young children develop in order to intentionally convey a specific message. However, at first, infants rarely communicate successfully with their gestures, prompting caregivers to interpret them. Although the role of caregivers in early communication development has been examined, little is known about how caregivers attribute a specific communicative function to infants' gestures. In this study, we argue that caregivers rely on the knowledge about the referent that is shared with infants in order to interpret what communicative function infants wish to convey with their gestures. We videotaped interactions from six caregiver-infant dyads playing with toys when infants were 8, 10, 12, 14, and 16 months old. We coded infants' gesture production and we determined whether caregivers interpreted those gestures as conveying a clear communicative function or not; we also coded whether infants used objects according to their conventions of use as a measure of shared knowledge about the referent. Results revealed an association between infants' increasing knowledge of object use and maternal interpretations of infants' gestures as conveying a clear communicative function. Our findings emphasize the importance of shared knowledge in shaping infants' emergent communicative skills.
Abstract:
BACKGROUND: This study describes the prevalence, associated anomalies, and demographic characteristics of cases of multiple congenital anomalies (MCA) in 19 population-based European registries (EUROCAT) covering 959,446 births in 2004 and 2010. METHODS: EUROCAT implemented a computer algorithm for classification of congenital anomaly cases followed by manual review of potential MCA cases by geneticists. MCA cases are defined as cases with two or more major anomalies of different organ systems, excluding sequences, chromosomal and monogenic syndromes. RESULTS: The combination of an epidemiological and clinical approach for classification of cases has improved the quality and accuracy of the MCA data. Total prevalence of MCA cases was 15.8 per 10,000 births. Fetal deaths and termination of pregnancy were significantly more frequent in MCA cases compared with isolated cases (p < 0.001) and MCA cases were more frequently prenatally diagnosed (p < 0.001). Live born infants with MCA were more often born preterm (p < 0.01) and with birth weight < 2500 grams (p < 0.01). Respiratory and ear, face, and neck anomalies were the most likely to occur with other anomalies (34% and 32%) and congenital heart defects and limb anomalies were the least likely to occur with other anomalies (13%) (p < 0.01). However, due to their high prevalence, congenital heart defects were present in half of all MCA cases. Among males with MCA, the frequency of genital anomalies was significantly greater than the frequency of genital anomalies among females with MCA (p < 0.001). CONCLUSION: Although rare, MCA cases are an important public health issue, because of their severity. The EUROCAT database of MCA cases will allow future investigation on the epidemiology of these conditions and related clinical and diagnostic problems.
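The snippet below is a deliberately simplified illustration of the stated MCA definition (two or more major anomalies of different organ systems, excluding sequences, chromosomal and monogenic syndromes); it is not the EUROCAT algorithm, and the field names are hypothetical assumptions.

# Simplified MCA rule: exclude syndromic cases, then require major anomalies
# in at least two different organ systems.
def is_mca(case):
    if case.get("syndrome") in {"sequence", "chromosomal", "monogenic"}:
        return False
    organ_systems = {a["organ_system"] for a in case["major_anomalies"]}
    return len(organ_systems) >= 2

example = {
    "syndrome": None,
    "major_anomalies": [
        {"name": "ventricular septal defect", "organ_system": "heart"},
        {"name": "cleft palate", "organ_system": "oro-facial"},
    ],
}
print(is_mca(example))  # True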
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance model (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of classification, which is obtained using an SVM with RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and provide a key element of an automatic difficult intubation assessment system.
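The following scikit-learn sketch illustrates the pipeline described above, under the assumption that AAM feature vectors and Mallampati labels have already been extracted: linear-SVM-based feature selection, an RBF-kernel SVM classifier, and leave-one-out cross-validation. The majority-voting step is omitted, the file names are hypothetical, and this is not the authors' implementation.

import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Hypothetical inputs: one AAM feature vector and one Mallampati label per patient
X = np.load("aam_features.npy")
y = np.load("mallampati_labels.npy")

pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(C=0.1, dual=False)),  # keep features with large linear-SVM weights
    SVC(kernel="rbf", C=10.0, gamma="scale"),       # final RBF-kernel classifier
)

# Leave-one-out cross-validation over the patients
scores = cross_val_score(pipeline, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")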
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 or l1TV norm of the prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r2MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation time. The l-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
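To illustrate the dipole inversion at the core of all the QSM methods compared above, the sketch below implements a basic single-orientation thresholded k-space division in NumPy. It is not the MCF, COSMOS or regularized methods evaluated in the study, and the function, argument names and usage data are assumptions for illustration only.

import numpy as np

def tkd_qsm(field_map, voxel_size=(0.65, 0.65, 0.65), threshold=0.2):
    """Estimate a susceptibility map from a field map (in ppm), B0 along z,
    using thresholded k-space division of the unit dipole kernel."""
    nx, ny, nz = field_map.shape
    kx = np.fft.fftfreq(nx, d=voxel_size[0])
    ky = np.fft.fftfreq(ny, d=voxel_size[1])
    kz = np.fft.fftfreq(nz, d=voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    k2[0, 0, 0] = np.inf                      # avoid division by zero at k = 0
    D = 1.0 / 3.0 - KZ**2 / k2                # unit dipole kernel in k-space

    # Invert the kernel, clipping small |D| values to limit streaking artifacts
    D_inv = np.where(np.abs(D) > threshold, 1.0 / np.where(D == 0, 1.0, D),
                     np.sign(D) / threshold)
    return np.real(np.fft.ifftn(np.fft.fftn(field_map) * D_inv))

# Usage with hypothetical data: chi = tkd_qsm(unwrapped_field_ppm)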