968 results for Interdisciplinary approach to knowledge
Abstract:
In spite of the high prevalence and negative impact of depression, little is known about its pathophysiology. Basic research on depression needs new animal models in order to increase knowledge of the disease and to search for new therapies. The work presented here aims to provide a neurobiologically validated model for investigating the relationships among sickness behavior, antidepressant treatment, and social dominance behavior. For this purpose, dominant individuals from dyads of male Swiss mice were treated with the bacterial endotoxin lipopolysaccharide (LPS) to induce social hierarchy destabilization. Two groups were treated with the antidepressants imipramine and fluoxetine prior to LPS administration. In these groups, antidepressant treatment prevented the occurrence of social destabilization. These results indicate that this model could be useful in providing new insights into the brain systems involved in depression.
Abstract:
Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for the information processing systems of research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes.

Results: This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we were able to join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra.

Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility, and shown the usability benefits, of a rigorous approach that is able to specify, validate, and perform genetic testing through easy-to-use end-user interfaces.
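The combination described here, a relational repository of process definitions executed under ACP-style control-flow rules, can be illustrated with a minimal, hypothetical sketch; the class names and the example workflow below are invented for illustration and are not taken from the CEGH implementation.

```python
# Minimal sketch of ACP-style process composition for a genetic-testing
# workflow. Names (Step, Seq, Alt, the step labels) are illustrative and
# hypothetical; a system like CEGH would store such definitions in a
# relational database instead.

class Process:
    def traces(self):
        raise NotImplementedError

class Step(Process):
    """An atomic laboratory action (an ACP atomic action)."""
    def __init__(self, name):
        self.name = name
    def traces(self):
        return [[self.name]]

class Seq(Process):
    """Sequential composition (ACP '.'): run p, then q."""
    def __init__(self, p, q):
        self.p, self.q = p, q
    def traces(self):
        return [a + b for a in self.p.traces() for b in self.q.traces()]

class Alt(Process):
    """Alternative composition (ACP '+'): run p or q."""
    def __init__(self, p, q):
        self.p, self.q = p, q
    def traces(self):
        return self.p.traces() + self.q.traces()

# A hypothetical Mendelian-disorder test: extract DNA, then either
# Sanger-sequence or run a targeted panel, then issue the report.
workflow = Seq(Step("extract_dna"),
               Seq(Alt(Step("sanger_seq"), Step("target_panel")),
                   Step("issue_report")))

for trace in workflow.traces():
    print(" -> ".join(trace))
```

Enumerating the traces of a composed process is one simple way such a specification can be checked against the control-flow rules before execution.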
Abstract:
Remanufacturing is the process of rebuilding used products so that the quality of remanufactured products is equivalent to that of new ones. Although the theme is gaining ground, it is still little explored, owing to lack of knowledge and to the difficulty of visualizing it systemically and implementing it effectively. Few models treat remanufacturing as a system; most studies still treat it as an isolated process, preventing it from being seen in an integrated manner. Therefore, the aim of this work is to organize the knowledge about remanufacturing, offering a vision of the remanufacturing system and contributing to an integrated view of the theme. The methodology employed was a literature review, adopting General Systems Theory to characterize the remanufacturing system. This work consolidates and organizes the elements of this system, enabling a better understanding of remanufacturing and assisting companies in adopting the concept.
Abstract:
Electronic business represents the new development perspective for worldwide trade. Together with the idea of e-business, and the exigency to exchange business messages between trading partners, the concept of business-to-business (B2B) integration arose. B2B integration is becoming necessary to allow partners to communicate and exchange business documents, like catalogues, purchase orders, reports, and invoices, overcoming architectural, applicative, and semantic differences, according to the business processes implemented by each enterprise. Business relationships can be very heterogeneous, and consequently there are various ways to integrate enterprises with each other. Moreover, nowadays not only large enterprises but also small and medium enterprises are moving towards e-business: more than two-thirds of Small and Medium Enterprises (SMEs) use the Internet as a business tool. One of the business areas actively facing the interoperability problem is supply chain management. In order to really allow SMEs to improve their business and to fully exploit ICT technologies in their business transactions, three main players must be considered and joined: the new emerging ICT technologies, the scenario and requirements of the enterprises, and the world of standards and standardisation bodies. This thesis presents the definition and development of an interoperability framework (and the related standardisation initiatives) to provide the Textile/Clothing sector with a shared set of business documents and protocols for electronic transactions. Considering also some limitations, the thesis proposes an ontology-based approach to improve the functionalities of the developed framework and, exploiting the technologies of the semantic web, to improve the standardisation life-cycle, intended as the development, dissemination, and adoption of B2B protocols for a specific business domain. The use of ontologies allows the semantic modelling of knowledge domains, upon which it is possible to develop a set of components for better management of B2B protocols, and to ease their comprehension and adoption by the target users.
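As a loose illustration of the ontology-based approach (not the thesis's actual framework), the following sketch models a fragment of a B2B document vocabulary as RDF triples with the rdflib library; the namespace URI, class names, and property are invented for the example.

```python
# Hypothetical sketch: modelling a fragment of a B2B business-document
# vocabulary as an ontology with rdflib. The namespace and all names
# are invented for illustration.
from rdflib import Graph, Namespace, RDF, RDFS

B2B = Namespace("http://example.org/textile-b2b#")

g = Graph()
g.bind("b2b", B2B)

# Class hierarchy: purchase orders, invoices, and catalogues are
# all kinds of business document.
g.add((B2B.BusinessDocument, RDF.type, RDFS.Class))
for doc in (B2B.PurchaseOrder, B2B.Invoice, B2B.Catalogue):
    g.add((doc, RDF.type, RDFS.Class))
    g.add((doc, RDFS.subClassOf, B2B.BusinessDocument))

# A property linking a document to the protocol that standardises it.
g.add((B2B.conformsTo, RDF.type, RDF.Property))
g.add((B2B.conformsTo, RDFS.domain, B2B.BusinessDocument))

print(g.serialize(format="turtle"))
```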
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the profiles of position, velocity, force, pressure, etc., are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is today plays an important role in achieving ever better performance, effectiveness, and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators, and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles in networks adopting an event-triggered communication system, are shown. These networks are characterized by the fact that a common knowledge of the global time is not available; therefore they are non-deterministic networks. Each topology is analyzed, and the proposed solution based on phase-locked loops, adopted for the basic master-slave case, is extended to handle the other configurations.
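The phase-locked-loop idea can be sketched as follows: the slave integrates its own estimate of the master's time base and steers its rate with a PI controller driven by the phase error measured whenever an event-triggered update arrives. The gains, the event probability, and the simulation below are hypothetical placeholders, not the thesis's tuned design.

```python
# Minimal sketch of a software PLL that lets a slave reconstruct the
# master's time base from event-triggered (irregular) updates.
# Gains and the simulated event pattern are hypothetical.
import random

kp, ki = 0.4, 0.05          # PI gains (illustrative)
dt = 0.001                  # slave update period [s]
master_rate = 1.0           # nominal master time rate
slave_time, rate, integ = 0.0, 0.9, 0.0   # slave starts with a wrong rate

master_time = 0.0
for step in range(5000):
    master_time += master_rate * dt
    slave_time += rate * dt
    # Event-triggered: a master timestamp arrives only occasionally.
    if random.random() < 0.05:
        err = master_time - slave_time    # phase error at the event
        integ += ki * err
        rate = 1.0 + kp * err + integ     # PI correction of the rate

print(f"final phase error: {master_time - slave_time:+.6f} s")
```

Between events the slave free-runs on its corrected rate, which is exactly where the reconstruction error of an event-triggered network accumulates.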
Abstract:
The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as a reference joint, because of the reduced number of bones involved and its simplicity compared with other synovial joints such as the knee or the wrist. Starting from the knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of joint configurations that maximize the congruence of the surfaces. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain better strength with less material. Thus, joint congruence maximization is a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories. Preliminary comparisons provide strong support for the predictions of the theoretical model.
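A toy numerical sketch of the congruence-maximization principle: for each flexion angle, search over a single remaining pose parameter for the configuration that minimizes the RMS gap between two articular profiles. The parabolic surfaces and one-parameter pose space are assumptions chosen for brevity, far simpler than the real tibio-talar geometry.

```python
# Toy sketch of congruence maximization: at each flexion angle, pick the
# vertical offset that minimizes the RMS gap between two 1D articular
# profiles. Real surfaces and pose spaces are far richer; this only
# illustrates the principle of tracing passive motion as the envelope
# of maximally congruent configurations.
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
talus = 0.5 * x**2                         # convex "talar" profile

def tibia(flexion, offset):
    # concave "tibial" profile, shifted with flexion, lifted by offset
    return 0.55 * (x - 0.1 * flexion)**2 + offset

def rms_gap(flexion, offset):
    return np.sqrt(np.mean((tibia(flexion, offset) - talus)**2))

offsets = np.linspace(-0.2, 0.2, 401)
trajectory = []
for flexion in np.linspace(-1.0, 1.0, 11):
    gaps = [rms_gap(flexion, o) for o in offsets]
    best = offsets[int(np.argmin(gaps))]   # most congruent configuration
    trajectory.append((flexion, best))

for f, o in trajectory:
    print(f"flexion {f:+.1f} -> offset {o:+.4f}")
```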
Abstract:
In recent years, new precision experiments have become possible with the high-luminosity accelerator facilities at MAMI and JLab, supplying physicists with precision data sets for different hadronic reactions in the intermediate energy region, such as pion photo- and electroproduction and real and virtual Compton scattering. By means of the low-energy theorem (LET), the global properties of the nucleon (its mass, charge, and magnetic moment) can be separated from the effects of the internal structure of the nucleon, which are effectively described by polarizabilities. The polarizabilities quantify the deformation of the charge and magnetization densities inside the nucleon in an applied quasistatic electromagnetic field. The present work is dedicated to developing a tool for the extraction of the polarizabilities from these precise Compton data with minimum model dependence, making use of the detailed knowledge of pion photoproduction by means of dispersion relations (DR). Due to the presence of $t$-channel poles, the dispersion integrals for two of the six Compton amplitudes diverge. Therefore, we have suggested subtracting the $s$-channel dispersion integrals at zero photon energy ($\nu = 0$). The subtraction functions at $\nu = 0$ are calculated through DR in the momentum transfer $t$ at fixed $\nu = 0$, subtracted at $t = 0$. For this calculation, we use the information about the $t$-channel process $\gamma\gamma \to \pi\pi \to N\bar{N}$. In this way, four of the polarizabilities can be predicted using the unsubtracted DR in the $s$-channel. The other two, $\alpha - \beta$ and $\gamma_\pi$, are free parameters in our formalism and can be obtained from a fit to the Compton data. We present the results for unpolarized and polarized RCS observables in the kinematics of the most recent experiments, and indicate an enhanced sensitivity to the nucleon polarizabilities in the energy range between the pion production threshold and the $\Delta(1232)$ resonance. Furthermore, we extend the DR formalism to virtual Compton scattering (radiative electron scattering off the nucleon), in which the concept of the polarizabilities is generalized to the case of a virtual initial photon by introducing six generalized polarizabilities (GPs). Our formalism provides predictions for the four spin GPs, while the two scalar GPs, $\alpha(Q^2)$ and $\beta(Q^2)$, have to be fitted to the experimental data at each value of $Q^2$. We show that at energies between the pion threshold and the $\Delta(1232)$ resonance position, the sensitivity to the GPs can be increased significantly as compared to low energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a wide range of energy and virtuality $Q^2$, which allows one to extract the GPs from VCS data in different kinematics with a minimum of model dependence.
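For orientation, a once-subtracted fixed-$t$ dispersion relation of the kind described above can be written schematically as follows; the notation follows the standard real Compton scattering literature and is not copied from this work.

```latex
% Once-subtracted fixed-t dispersion relation for an invariant Compton
% amplitude A_i (schematic; pole terms and conventions follow the usual
% RCS literature, not necessarily this work's exact notation):
\begin{equation*}
A_i(\nu, t) = A_i^{\mathrm{pole}}(\nu, t)
  + \bigl[\, A_i(0, t) - A_i^{\mathrm{pole}}(0, t) \,\bigr]
  + \frac{2\nu^2}{\pi}\,\mathcal{P}\!\!\int_{\nu_{\mathrm{thr}}}^{\infty}
      \mathrm{d}\nu'\,
      \frac{\operatorname{Im} A_i(\nu', t)}{\nu'\,(\nu'^2 - \nu^2)}
\end{equation*}
```

The subtraction function $A_i(0, t)$ is then itself evaluated through a dispersion relation in $t$, which is where the $\gamma\gamma \to \pi\pi \to N\bar{N}$ input enters.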
Abstract:
Biosensors find wide application in clinical diagnostics, bioprocess control, and environmental monitoring. They should not only show high specificity and reproducibility but also high sensitivity and stability of the signal. Therefore, I introduce a novel sensor technology based on plasmonic nanoparticles which overcomes both of these limitations. Plasmonic nanoparticles exhibit strong absorption and scattering in the visible and near-infrared spectral range. The plasmon resonance, the collective coherent oscillation mode of the conduction band electrons against the positively charged ionic lattice, is sensitive to the local environment of the particle. I monitor these changes in the resonance wavelength by a new dark-field spectroscopy technique. Thanks to a strong light source and a highly sensitive detector, a temporal resolution in the microsecond regime is possible, in combination with high spectral stability. This opens a window to investigate dynamics on the molecular level and to gain knowledge about fundamental biological processes.

First, I investigate adsorption at the non-equilibrium as well as at the equilibrium state. I show the temporal evolution of single adsorption events of fibrinogen on the surface of the sensor on a millisecond timescale. Fibrinogen is a blood plasma protein with a unique shape that plays a central role in blood coagulation and is always involved in cell-biomaterial interactions. Further, I monitor equilibrium coverage fluctuations of sodium dodecyl sulfate and demonstrate a new approach to quantify the characteristic rate constants which is independent of mass transfer interference and of long-term drifts of the measured signal. This method had been investigated theoretically by Monte Carlo simulations, but so far there had been no sensor technology with a sufficient signal-to-noise ratio.

Second, I apply plasmonic nanoparticles as sensors for the determination of diffusion coefficients. Thereby, the sensing volume of a single, immobilized nanorod is used as the detection volume. When a diffusing particle enters the detection volume, a shift in the resonance wavelength is introduced. As no labeling of the analyte is necessary, the hydrodynamic radius, and thus the diffusion properties, are not altered and can be studied in their natural form. In comparison to the conventional Fluorescence Correlation Spectroscopy technique, a volume reduction by a factor of 5000-10000 is reached.
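The fluctuation method can be sketched numerically: for Langmuir-type adsorption at equilibrium, the autocorrelation of coverage fluctuations decays as exp(-(k_a c + k_d) t), so an exponential fit to the measured autocorrelation yields the combined rate constant without reference to absolute coverage levels. The two-state simulation and the rate values below are hypothetical.

```python
# Hypothetical sketch: extract a combined rate constant k = ka*c + kd
# from the autocorrelation of simulated equilibrium coverage
# fluctuations (two-state Markov occupancy of a single binding site).
import numpy as np

rng = np.random.default_rng(0)
ka_c, kd = 50.0, 150.0          # adsorption/desorption rates [1/s]
dt, n = 1e-5, 200_000           # time step [s], number of samples

occ = np.empty(n)
state = 0
for i in range(n):
    # one small-step transition of the two-state process
    if state == 0 and rng.random() < ka_c * dt:
        state = 1
    elif state == 1 and rng.random() < kd * dt:
        state = 0
    occ[i] = state

f = occ - occ.mean()
lags = np.arange(1, 400)
acf = np.array([np.mean(f[:-l] * f[l:]) for l in lags]) / np.mean(f * f)

# Fit the log of the ACF: acf(t) ~ exp(-(ka*c + kd) * t)
good = acf > 0.05
k_est = -np.polyfit(lags[good] * dt, np.log(acf[good]), 1)[0]
print(f"true k = {ka_c + kd:.0f} 1/s, estimated k = {k_est:.0f} 1/s")
```

Because only the decay time of the fluctuations is fitted, slow drifts of the absolute signal drop out, which is the stated advantage of the approach.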
Abstract:
Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variations during this period were shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were to be estimated accurately for each cylinder, they would still be ineffective as smoke limiters. A decision tree trained on snap throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods such as regression were described. An application of accurate smoke spike detection, in which the injection pressure is increased at points with high opacity to reduce the cumulative particulate matter emissions substantially with a minimum increase in the cumulative nitrogen oxide emissions, was illustrated with dimensional and empirical modeling.
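A hedged sketch of the classifier setup the abstract describes: a shallow decision tree over engine-control-module estimates. The feature set matches the abstract, but the synthetic data, labeling rule, and tree depth below are placeholders for the snap-throttle training data and engineering-knowledge pruning actually used.

```python
# Illustrative sketch (synthetic data): a decision tree that flags smoke
# spikes from ECM-style features. Feature values, labels, and tree depth
# are placeholders, not the engine data used in the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(0.2, 1.0, n),    # ECM fuel-to-oxygen-ratio estimate
    rng.uniform(0.0, 0.4, n),    # ECM EGR-fraction estimate
    rng.uniform(600, 2100, n),   # engine speed [rpm]
    rng.uniform(0.8, 2.5, n),    # manifold pressure ratio
])
# Synthetic rule standing in for real opacity labels: spikes occur when
# the estimated ratio is rich AND the pressure ratio is high (turbo lag).
y = ((X[:, 0] > 0.7) & (X[:, 3] > 1.8)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
names = ["fuel_to_O2", "egr_frac", "speed_rpm", "map_ratio"]
print(export_text(tree, feature_names=names))
```

The printed rules illustrate why a tree suits this task: the learned thresholds can be inspected, pruned, and overridden with engineering knowledge, unlike the coefficients of a regression fit.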
Abstract:
Fever in neutropenia is the most frequent potentially life-threatening complication of chemotherapy in children and adolescents with cancer. This review summarizes recent studies that refine our knowledge of how to manage pediatric fever in neutropenia, and their implications for clinical practice and research.
Abstract:
Osteoarthritis (OA) is the most common form of joint disease and the leading cause of pain and physical disability in older people. Risk factors for the incidence and progression of osteoarthritis vary considerably according to the type of joint. Disease assessment is difficult, and the relationship between the radiographic severity of joint damage and the incidence and severity of pain is only modest. Psychosocial and socio-economic factors play an important role. This chapter will discuss four main guiding principles for the management of OA: (1) to avoid overtreating people with mild symptoms; (2) to attempt to avoid doing more harm than good ('primum non nocere'); (3) to base patient management on the severity of pain, disability, and distress, and not on the severity of joint damage or radiographic change; and (4) to start with advice about simple measures that patients can take to help themselves, and only progress to interventions that require supervision or specialist knowledge if simple measures fail. Effect sizes derived from meta-analyses of large randomized trials in OA are only small to moderate for most therapeutic interventions, but they are still valuable for patients and clinically relevant for physicians. Joint replacement may be the only option with a large effect size, but it is appropriate only for the relatively small number of people with OA who have advanced disease and severe symptoms. The key to successful management is for patients and health professionals to work together to develop optimal treatment strategies for the individual.
Abstract:
There is nothing new or original in stating that the global economy directly impacts the profession of technical communicators. The globalization of the workplace requires that technical communicators be prepared to work in increasingly linguistically and culturally diverse contexts. These new exigencies have natural repercussions on the research and educational practices of the field. In this work, I draw on rhetoric, linguistics, and literacy theory to explore the definition, role, and meaning of the global context for the disciplinary construction of professional and technical communication. By adopting an interdisciplinary and diachronic perspective, I assert that the global context is a heuristic means for sophisticating the disciplinary identity of the field and for reinforcing its place within the humanities. Consequently, I contend that the globalization of the workplace is a kairotic moment for underscoring the rhetorical dimension of professional and technical communication.
Abstract:
A multitude of products, systems, approaches, views and notions characterize the field of e-learning. This article attempts to disentangle the field by using economic and sociological theories, theories of marketing management and strategy as well as practical experience gained by the author while working with leading edge suppliers of e-learning. On this basis, a distinction between knowledge creation e-learning and knowledge transfer e-learning is made. The various views are divided into four different ideal-typical paradigms, each with its own characteristics and limitations. Selecting the right paradigm to use in the development of an e-learning strategy may prove crucial to success. Implications for the development of an e-learning strategy in businesses and educational institutions are outlined.
Abstract:
For the first time in Switzerland, specifically trained livestock owners were included in a national disease surveillance program by the Federal Veterinary Office. A questionnaire on clinical and epidemiological aspects of Bluetongue disease (BT), as well as on herd management, was completed by 26 sheep owners three months after they had attended a training course about BT. The control group consisted of 264 randomly selected sheep and cattle owners who had not attended a training course. Results showed that disease awareness for BT was considerably increased after attending the training course. This was especially evident in the participants' better knowledge of the wide range of possible symptoms. Training courses aimed at increasing the disease awareness of livestock owners are an efficient, cost-effective instrument in control programs for exotic diseases.
Abstract:
This paper assesses possible contributions of land change science to the growing body of knowledge about large-scale land acquisition. Despite obvious commonalities, such as a problem-oriented and interdisciplinary approach to land change, there seems to be little overlap between the two fields thus far. We adopt a sustainability research perspective — an important feature of land change science — to review research questions about large-scale land acquisition that are currently being addressed, and to define questions for further inquiry. Possible contributions of land change science toward more sustainable land investments are based on understanding land use change not only as a consequence, but also as a cause of large-scale land acquisition and as a solution to the problems land acquisition can create.