24 results for Informatics Engineering
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Currently, the acoustic and nanoindentation techniques are two of the most widely used techniques for measuring the elastic modulus of materials. In this article, the fundamental principles and limitations of both techniques are presented and discussed. Recent advances in the nanoindentation technique are also reviewed. An experimental study on ceramic, metallic, composite, and single-crystal materials was also carried out. The results showed that the ultrasonic technique is capable of providing results in agreement with those reported in the literature. However, the ultrasonic technique does not allow measuring the elastic modulus of some small samples and single crystals. On the other hand, the nanoindentation technique estimates elastic modulus values in reasonable agreement with those measured by acoustic methods, particularly in amorphous materials, while in some polycrystalline materials some deviation from the expected values was obtained.
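For isotropic materials, the ultrasonic measurement reduces to a closed-form relation between Young's modulus, the density, and the longitudinal and transverse wave speeds. A minimal sketch of that standard relation (the function name and the illustrative steel inputs are ours, not values from the article):

```python
def elastic_modulus_ultrasonic(rho, v_l, v_t):
    """Young's modulus (Pa) of an isotropic solid from density rho (kg/m^3)
    and longitudinal (v_l) / transverse (v_t) wave speeds (m/s):
    E = rho * v_t^2 * (3*v_l^2 - 4*v_t^2) / (v_l^2 - v_t^2)."""
    return rho * v_t**2 * (3 * v_l**2 - 4 * v_t**2) / (v_l**2 - v_t**2)

# Illustrative values for a typical steel: rho = 7850 kg/m^3,
# v_l = 5900 m/s, v_t = 3200 m/s -> E of roughly 200 GPa.
E = elastic_modulus_ultrasonic(7850, 5900, 3200)
```

The same wave-speed data also yield the shear modulus (rho * v_t^2) and Poisson's ratio, which is why acoustic methods recover the full isotropic elastic description in one measurement.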
Abstract:
We present a scheme for quasi-perfect transfer of polariton states from a sender to a spatially separated receiver, both composed of high-quality cavities filled with atomic samples. The sender and the receiver are connected by a nonideal transmission channel (the data bus) modelled as a network of lossy empty cavities. In particular, we analyze the influence of a large class of data-bus topologies on the fidelity and transfer time of the polariton state. Moreover, we assume dispersive couplings between the polariton fields and the data-bus normal modes in order to achieve a tunneling-like state transfer. Such a tunneling-transfer mechanism, by which the excitation energy of the polariton effectively does not populate the data-bus cavities, appreciably attenuates the dissipative effects of those cavities. After deriving a Hamiltonian for the effective coupling between the sender and the receiver, we show that the decay rate of the fidelity is proportional to a cooperativity parameter that weighs the cost of the dissipation rate against the benefit of the effective coupling strength. An increase in the fidelity of the transfer process can be achieved at the expense of longer transfer times. The dependence of both the fidelity and the transfer time on the network topology is analyzed in detail for distinct parameter regimes. It follows that the data-bus topology can be exploited to control the time of the state-transfer process.
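The trade-off between fidelity and transfer time can be caricatured with a back-of-the-envelope scaling: a full population swap through an effective coupling g_eff takes a time of order pi/(2*g_eff), during which residual dissipation at rate kappa erodes the fidelity. This is only a toy estimate of the scaling, not the paper's actual derivation; g_eff and kappa are generic placeholders:

```python
import math

def transfer_estimate(g_eff, kappa):
    """Toy estimate: a full state swap via the effective sender-receiver
    coupling g_eff takes t = pi / (2 * g_eff); exponential loss at the
    residual rate kappa over that time bounds the achievable fidelity."""
    t_transfer = math.pi / (2 * g_eff)
    fidelity = math.exp(-kappa * t_transfer)
    return t_transfer, fidelity
```

Increasing g_eff relative to kappa (i.e., raising the cooperativity) both shortens the transfer and raises the fidelity bound, matching the qualitative trade-off described in the abstract.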
Abstract:
The mapping, exact or approximate, of a many-body problem onto an effective single-body problem is one of the most widely used conceptual and computational tools of physics. Here, we propose and investigate the inverse map of effective approximate single-particle equations onto the corresponding many-particle system. This approach allows us to understand which interacting system a given single-particle approximation is actually describing, and how far this is from the original physical many-body system. We illustrate the resulting reverse engineering process by means of the Kohn-Sham equations of density-functional theory. In this application, our procedure sheds light on the nonlocality of the density-potential mapping of density-functional theory, and on the self-interaction error inherent in approximate density functionals.
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, comprising clinical EEG recordings obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value.
Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing most consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
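The kernel-parameter sensitivity discussed above stems from how the kernel radius reshapes pairwise similarities between feature vectors. A minimal pure-Python illustration of this effect (a toy mean-similarity classifier, not any of the six kernel machines actually benchmarked in the study):

```python
import math

def rbf_kernel(x, y, radius):
    """Gaussian radial basis function kernel between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * radius ** 2))

def kernel_classify(train, labels, x, radius):
    """Toy kernel classifier: assign x to the class whose training points
    have the larger mean kernel similarity (a crude stand-in for the
    SVM-type decision rules compared in the study)."""
    scores = {}
    for lab in set(labels):
        sims = [rbf_kernel(x, t, radius)
                for t, l in zip(train, labels) if l == lab]
        scores[lab] = sum(sims) / len(sims)
    return max(scores, key=scores.get)
```

Sweeping `radius` over a grid of values, as the study does with 26 kernel-parameter settings, shows how similarity scores (and hence decisions) drift with the kernel parameter.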
Abstract:
This paper presents results of research into the use of the Bellman-Zadeh approach to decision making in a fuzzy environment for solving multicriteria power engineering problems. The application of the approach conforms to the principle of guaranteed result and provides a constructive, computationally efficient way to obtain harmonious solutions by solving the associated maxmin problems. The presented results are universally applicable and are already being used to solve diverse classes of power engineering problems. This is illustrated by considering problems of power and energy shortage allocation, power system operation, optimization of network configuration in distribution systems, and energetically effective voltage control in distribution systems. (c) 2011 Elsevier Ltd. All rights reserved.
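In the Bellman-Zadeh scheme, each criterion contributes a membership function, the fuzzy decision is their intersection (the pointwise minimum), and the chosen alternative maximizes that minimum. A minimal sketch with two hypothetical criteria of our own invention, not a power-engineering case from the paper:

```python
def bellman_zadeh_choice(alternatives, memberships):
    """Pick the alternative maximizing the minimum membership value
    across all criteria (the maxmin rule of the Bellman-Zadeh approach)."""
    return max(alternatives, key=lambda x: min(mu(x) for mu in memberships))

# Hypothetical criteria on a 0..10 scale: one rewards large values,
# the other rewards small values; the maxmin compromise sits in between.
prefer_large = lambda x: x / 10.0
prefer_small = lambda x: 1.0 - x / 10.0
best = bellman_zadeh_choice([0, 2, 5, 8, 10], [prefer_large, prefer_small])
```

The maxmin rule embodies the principle of guaranteed result mentioned above: the selected alternative is the one whose worst-satisfied criterion is as good as possible.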
Abstract:
This work describes the development of an engineering approach based upon a toughness scaling methodology incorporating the effects of weld strength mismatch on crack-tip driving forces. The approach adopts a nondimensional Weibull stress, σ̄_w, as the near-tip driving force to correlate cleavage fracture across cracked weld configurations with different mismatch conditions, even though the loading parameter (measured by J) may vary widely due to mismatch and constraint variations. Application of the procedure to predict the failure strain for an overmatched girth weld made of an API X80 pipeline steel demonstrates the effectiveness of the micromechanics approach. Overall, the results lend strong support to the use of a Weibull stress based procedure in defect assessments of structural welds.
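In Beremin-type models, the Weibull stress aggregates the maximum principal stress over the fracture process zone; discretized over finite-element volumes it reads sigma_w = ((1/V0) * sum(sigma1_i^m * dV_i))^(1/m). A sketch of that standard definition (the shape parameter m and reference volume V0 in the example are placeholders, not calibrated values from this study):

```python
def weibull_stress(sigma1, volumes, m, v0=1.0):
    """Discretized Weibull stress over the fracture process zone:
    sigma_w = ((1/v0) * sum(sigma1_i**m * dV_i)) ** (1/m).
    sigma1: maximum principal stress in each element;
    volumes: element volumes (same units as v0); m: Weibull modulus."""
    s = sum(sig ** m * dv for sig, dv in zip(sigma1, volumes))
    return (s / v0) ** (1.0 / m)
```

For a uniform stress field over a unit volume, the Weibull stress reduces to that stress, a quick sanity check; the large exponent m makes the measure dominated by the most highly stressed material, which is what lets it correlate cleavage across different constraint conditions.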
Abstract:
This paper is a study of various electric signals, which have been employed throughout the history of communication engineering in its two main landmarks: the telegraph and the telephone. The signals are presented in their time and frequency domain representations. The historical order has been followed in the presentation: wired systems, spark gap wireless, continuous wave (CW) and amplitude modulation (AM), detection by rectification, and frequency modulation (FM). The analysis of these signals is meant to lead to a better understanding of the evolution of communication technology. The material presented in this work could be used to illustrate "Signals and Systems" and "Communication Systems" courses by taking advantage of its technical as well as historical contents.
Abstract:
The 'biomimetic' approach to tissue engineering usually involves the use of a bioreactor mimicking physiological parameters whilst supplying nutrients to the developing tissue. Here we present a new heart valve bioreactor, having as its centrepiece a ventricular assist device (VAD), which exposes the cell-scaffold constructs to a wider array of mechanical forces. The pump of the VAD has two chambers, a blood chamber and a pneumatic chamber, separated by an elastic membrane. Pulsatile air pressure is generated by a piston-type actuator and delivered to the pneumatic chamber, ejecting the fluid in the blood chamber. Subsequently, vacuum applied to the pneumatic chamber causes the blood chamber to fill. A mechanical heart valve was placed in the VAD's inflow position. The tissue engineered (TE) valve was placed in the outflow position. The VAD was coupled in series with a Windkessel compliance chamber, a variable throttle, and a reservoir, connected by silicone tubing. The reservoir sat on an elevated platform, allowing adjustment of ventricular preload between 0 and 11 mmHg. To allow for sterile gaseous exchange between the circuit interior and exterior, a 0.2 μm filter was placed at the reservoir. Pressure and flow were registered downstream of the TE valve. The circuit was filled with culture medium and fitted in a standard 5% CO2 incubator set at 37 °C. Pressure and flow waveforms were similar to those obtained under physiological conditions for the pulmonary circulation. The 'cardiomimetic' approach presented here represents a new perspective on conventional biomimetic approaches in TE, with potential advantages. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
Objective: The aim of this article is to propose an integrated framework for extracting and describing patterns of disorders from medical images using a combination of linear discriminant analysis and active contour models. Methods: A multivariate statistical methodology was first used to identify the most discriminating hyperplane separating two groups of images (from healthy controls and patients with schizophrenia) contained in the input data. After this, the present work makes explicit the differences found by the multivariate statistical method by subtracting the discriminant models of controls and patients, weighted by the pooled variance between the two groups. A variational level-set technique was used to segment clusters of these differences. We obtained a label for each anatomical change using the Talairach atlas. Results: In this work all the data were analysed simultaneously rather than assuming a priori regions of interest. As a consequence, by using active contour models, we were able to obtain regions of interest that were emergent from the data. The results were evaluated using, as a gold standard, well-known facts about the neuroanatomical changes related to schizophrenia. Most of the items in the gold standard were covered in our result set. Conclusions: We argue that such an investigation provides a suitable framework for characterising the high complexity of magnetic resonance images in schizophrenia, as the results obtained indicate a high sensitivity rate with respect to the gold standard. (C) 2010 Elsevier B.V. All rights reserved.
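The discriminating hyperplane in the first step comes from linear discriminant analysis. A self-contained 2-D sketch of the Fisher discriminant direction, w = Sw^-1 (mean_a - mean_b), with Sw the pooled within-class scatter; the real pipeline operates on high-dimensional image data, not 2-D points:

```python
def fisher_direction(class_a, class_b):
    """Fisher/LDA discriminant direction for 2-D samples:
    w = Sw^-1 (mean_a - mean_b), Sw = pooled within-class scatter."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    # Solve the 2x2 system sw * w = d by the explicit inverse.
    return [(sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
            (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det]
```

Projecting each sample onto w yields a single discriminant score; the subtraction-and-segmentation steps described above then localize where in the image the two groups drive that score apart.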
Abstract:
Objective: To develop a model to predict the bleeding source and identify the cohort amongst patients with acute gastrointestinal bleeding (GIB) who require urgent intervention, including endoscopy. Patients with acute GIB, an unpredictable event, are most commonly evaluated and managed by non-gastroenterologists. Rapid and consistently reliable risk stratification of patients with acute GIB for urgent endoscopy may potentially improve outcomes amongst such patients by targeting scarce health-care resources to those who need them the most. Design and methods: Using ICD-9 codes for acute GIB, 189 patients with acute GIB and all available data variables required to develop and test the models were identified from a hospital medical records database. Data on 122 patients were utilized for development of the models, and data on 67 patients were utilized to perform comparative analysis of the models. Clinical data such as presenting signs and symptoms, demographic data, presence of co-morbidities, laboratory data and corresponding endoscopic diagnoses and outcomes were collected. The clinical data and endoscopic diagnosis collected for each patient were utilized to retrospectively ascertain optimal management for each patient. Clinical presentations and the corresponding treatments were utilized as training examples. Eight mathematical models including artificial neural network (ANN), support vector machine (SVM), k-nearest neighbor, linear discriminant analysis (LDA), shrunken centroid (SC), random forest (RF), logistic regression, and boosting were trained and tested. The performance of these models was compared using standard statistical analysis and ROC curves. Results: Overall, the random forest model best predicted the source, need for resuscitation, and disposition, with accuracies of approximately 80% or higher (accuracy for endoscopy was greater than 75%).
The area under the ROC curve for RF was greater than 0.85, indicating excellent performance by the random forest model. Conclusion: While most mathematical models are effective as a decision support system for the evaluation and management of patients with acute GIB, in our testing the RF model consistently demonstrated the best performance. Amongst patients presenting with acute GIB, mathematical models may facilitate identification of the source of GIB and the need for intervention, and allow optimization of care and healthcare resource allocation; these, however, require further validation. (c) 2007 Elsevier B.V. All rights reserved.
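The model comparison above hinges on the area under the ROC curve. The AUC can be computed directly from predicted scores via the rank-sum identity, AUC = P(score of a random positive > score of a random negative), without reference to any particular one of the eight models; a minimal sketch:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case outscores a randomly chosen negative one
    (ties count one half). labels: 1 = positive, 0 = negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 * (p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect ranking of positives above negatives and 0.5 means chance-level ranking, which is why the reported RF value above 0.85 indicates strong discrimination.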
Abstract:
Our long-term objective is to devise reliable methods to generate biological replacement teeth exhibiting the physical properties and functions of naturally formed human teeth. Previously, we demonstrated the successful use of tissue engineering approaches to generate small, bioengineered tooth crowns from harvested pig and rat postnatal dental stem cells (DSCs). To facilitate characterizations of human DSCs, we have developed a novel radiographic staging system to accurately correlate human third molar tooth developmental stage with anticipated harvested DSC yield. Our results demonstrated that DSC yields were higher in less developed teeth (Stages 1 and 2) and lower in more developed teeth (Stages 3, 4, and 5). The greatest cell yields and colony-forming unit (CFU) capability were obtained from Stage 1 and 2 tooth dental pulp. We conclude that radiographic developmental staging can be used to accurately assess the utility of harvested human teeth for future dental tissue engineering applications.
Abstract:
In this work, thermodynamic models for fitting the phase equilibrium of binary systems were applied, aiming to predict the high pressure phase equilibrium of multicomponent systems of interest in the food engineering field, comparing the results generated by the models with new experimental data and with those from the literature. Two mixing rules were used with the Peng-Robinson equation of state, one with the mixing rule of van der Waals and the other with the composition-dependent mixing rule of Mathias et al. The systems chosen are of fundamental importance in food industries, such as the binary systems CO2-limonene, CO2-citral and CO2-linalool, and the ternary systems CO2-limonene-citral and CO2-limonene-linalool, where high pressure phase equilibrium knowledge is important to extract and fractionate citrus fruit essential oils. For the CO2-limonene system, some experimental data were also measured in this work. The results showed the high capability of the model using the composition-dependent mixing rule to model the phase equilibrium behavior of these systems.
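The Peng-Robinson equation of state enters through its pure-component parameters a(T) and b and through the mixing rule that combines them. A sketch of the standard pure-component expressions and the classical van der Waals one-fluid rule for a mixture (the CO2 critical constants in the comment are approximate literature values, and any binary interaction parameter k_ij would have to be fitted to data, as in the article):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def pr_pure(tc, pc, omega, t):
    """Peng-Robinson pure-component parameters a(T) (Pa m^6/mol^2) and
    b (m^3/mol) from critical temperature/pressure and acentric factor."""
    k = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + k * (1.0 - math.sqrt(t / tc))) ** 2
    a = 0.45724 * R ** 2 * tc ** 2 / pc * alpha
    b = 0.07780 * R * tc / pc
    return a, b

def vdw_mix(x, a, b, kij):
    """Classical van der Waals one-fluid mixing rules:
    a_mix = sum_ij x_i x_j sqrt(a_i a_j) (1 - k_ij); b_mix = sum_i x_i b_i."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

# Approximate literature constants for CO2: Tc = 304.13 K, Pc = 7.377 MPa,
# omega = 0.225; evaluated here at 313 K.
a_co2, b_co2 = pr_pure(304.13, 7.377e6, 0.225, 313.0)
```

The composition-dependent rule of Mathias et al. replaces the symmetric quadratic `a_mix` above with a form whose interaction term depends on composition, which is what gives it the extra flexibility the article reports for these CO2-terpene systems.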