905 results for: Development of large software systems
Abstract:
Premise of the study: Microsatellite primers were developed for Jatropha curcas (Euphorbiaceae), a tree species with large potential for biofuel production, to investigate its natural genetic diversity and mating system and to facilitate the establishment of tree improvement and conservation programs. Methods and Results: Using a protocol for genomic library enrichment, 104 clones containing 195 repeat motifs were identified. Primer pairs were developed for 40 microsatellite loci and validated in 41 accessions of J. curcas from six provenances. Nine loci were polymorphic, revealing two to eight alleles per locus, and six primers were able to amplify alleles in the congeners J. podagrica, J. pohliana, and J. gossypifolia, but not in other Euphorbiaceae species such as Hevea brasiliensis, Manihot esculenta, or Ricinus communis. Conclusions: The primers developed here revealed polymorphic loci that are suitable for studies of genetic diversity and structure, mating system, and gene flow in J. curcas and some congeners.
Abstract:
Abstract Background Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of those methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox that allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in Mathematics and Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.
Abstract:
The great challenges for researchers working in the field of vaccinology are optimizing DNA vaccines for use in humans or large animals and creating effective single-dose vaccines using appropriate controlled-delivery systems. Plasmid DNA encoding the heat-shock protein 65 (hsp65) (DNAhsp65) has been shown to induce protective and therapeutic immune responses in a murine model of tuberculosis (TB). Despite the success of the naked DNAhsp65-based vaccine in protecting mice against TB, it requires multiple doses of high amounts of DNA for effective immunization. In order to optimize this DNA vaccine and simplify the vaccination schedule, we coencapsulated DNAhsp65 and the adjuvant trehalose dimycolate (TDM) into biodegradable poly(DL-lactide-co-glycolide) (PLGA) microspheres for single-dose administration. Moreover, a single-shot prime-boost vaccine formulation based on a mixture of two different PLGA microspheres, presenting faster release of DNAhsp65 and slower release of the recombinant hsp65 protein, was also developed. These formulations were tested in mice as well as in guinea pigs in comparison with the efficacy and toxicity induced by the naked DNA preparation or BCG. The single-shot prime-boost formulation clearly presented good efficacy and diminished lung pathology in both mice and guinea pigs.
Development of nanoinjector devices for electrospray ionization - tandem mass spectrometry (ESI-MSn)
Abstract:
In mass spectrometric (MS) systems with electrospray ionization (ESI), the sample can be analyzed coupled to separation systems (such as liquid chromatography or capillary electrophoresis) or simply by direct infusion. The greatest benefit of this type of injection is the possibility of continuous use of small amounts of sample over a long period of time. This extended analysis time allows a complete study of fragmentation by mass spectrometry, which is critical for the structure elucidation of new compounds or when using an ion trap mass analyzer. The injector filled with the sample is placed at the ESI source inlet, creating an electric field suitable for the continuous formation of a spray (solvent and sample) and, consequently, the gradual and even release of the sample. For the formation of the spray, it is necessary that the injector tip be metallized. The formation of a bilayer of titanium and gold provided excellent adhesion of the film, resulting in a nanoinjector for ionization/spray formation in the MS system. The nanoinjectors showed high repeatability and stability over 100 min of continuous sampling with 10 µL of sample.
Abstract:
The aim of this Account is to provide an overview of our current research activities on the design and modification of superparamagnetic nanomaterials for application in the field of magnetic separation and catalysis. First, an introduction to magnetism and magnetic separation is given. Then, the synthetic strategies that have been developed for generating superparamagnetic nanoparticles spherically coated by silica and other oxides are discussed, with a focus on well-characterized systems prepared by methods that generate high-quality samples and are easy to scale up. A set of magnetically recoverable catalysts prepared in our research group by the unique combination of superparamagnetic supports and metal nanoparticles is highlighted. This Account is concluded with personal remarks and perspectives on this research field.
Abstract:
Micelles composed of amphiphilic copolymers linked to a radioactive element are used in nuclear medicine predominantly for diagnostic applications. A relevant advantage of polymeric micelles in aqueous solution is their resulting particle size, which can vary from 10 to 100 nm in diameter. In this review, polymeric micelles labeled with radioisotopes, including technetium (99mTc) and indium (111In), and their clinical applications in several diagnostic techniques, such as single photon emission computed tomography (SPECT), gamma scintigraphy, and nuclear magnetic resonance (NMR), are discussed. Micelle use primarily for the diagnosis of lymphatic ducts and sentinel lymph nodes also receives special attention. Notably, the employment of these diagnostic techniques can be considered a significant tool for functionally exploring body systems as well as investigating molecular pathways involved in the disease process. The use of molecular modeling methodologies and computer-aided drug design strategies can also yield valuable information for the rational design and development of novel radiopharmaceuticals.
Abstract:
A supply chain starts when a demand arises and ends with the transport and delivery of material at its final destination. With this in mind, most companies that manufacture, process, or distribute consumer goods, spare parts and components for production, processing, and finished goods, in national or international markets, may not have information about and control over their supply chain performance. This article presents the evolution of logistics concepts and models, purchase order and international supplier management, and the control tower together with its logistics information systems. It also presents a real process implementation for a global high-tech manufacturing company.
Abstract:
Small-scale fluid flow systems have been studied for various applications, such as chemical reagent dosing and cooling devices for compact electronic components. This work presents the complete development cycle of an optimized heat sink designed using the Topology Optimization Method (TOM) for best performance, including minimization of pressure drop in the fluid flow and maximization of heat dissipation effects, aimed at small-scale applications. TOM is applied to a design domain to obtain an optimized channel topology according to a given multi-objective function that combines pressure drop minimization and heat transfer maximization. The Stokes flow hypothesis is adopted. Moreover, both conduction and forced convection effects are included in the steady-state heat transfer model. The topology optimization procedure combines the Finite Element Method (to carry out the physical analysis) with Sequential Linear Programming (as the optimization algorithm). Two-dimensional topology optimization results of channel layouts obtained for a heat sink design are presented as an example to illustrate the design methodology. 3D computational simulations and prototype manufacturing have been carried out to validate the proposed design methodology.
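The abstract does not give the exact form of the multi-objective function; as a rough illustration only, the following Python sketch shows one common way such a weighted combination of pressure drop and a heat-transfer measure can be set up (the weight, the reference values, and the use of thermal compliance as the heat-transfer term are assumptions, not the formulation used in the work).

```python
def multiobjective(delta_p, thermal_compliance, w=0.5,
                   delta_p_ref=1.0, tc_ref=1.0):
    """Weighted multi-objective function: penalizes pressure drop and
    thermal compliance (a proxy for maximizing heat dissipation), with
    both terms normalized by reference values so they are comparable."""
    return w * (delta_p / delta_p_ref) + (1.0 - w) * (thermal_compliance / tc_ref)

# Hypothetical values for one candidate channel topology
print(multiobjective(delta_p=120.0, thermal_compliance=0.8,
                     w=0.6, delta_p_ref=150.0, tc_ref=1.0))
```

In an actual TOM loop an objective of this kind would be evaluated from the finite element solution at every iteration and its sensitivities passed to the Sequential Linear Programming solver.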
Abstract:
In this paper, nighttime light data are suggested as a proxy for the spatial distribution of vehicles running in urban and nearby areas. Nighttime lights capture human activities, in contrast to traditional Earth observing systems that focus on natural systems: it is human activity that becomes visible in the form of the brightness of nocturnal lights. Two available nighttime light datasets were used in this work. The first was provided by the U.S. Air Force Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS), henceforth DMSP-OLS. The second is from the NASA-NOAA Suomi National Polar-orbiting Partnership (NPP) satellite, henceforth Suomi-NPP. To validate the proposed methodology, hundreds of urban areas in South America were analyzed at a high degree of resolution. The results of this study showed that nighttime lights are very well correlated with vehicle fleet, population, and impervious surfaces, albeit with strong spatial variability. The results suggest a better understanding of human activities in the context of a vehicle-based city conception.
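The abstract reports that nighttime brightness is well correlated with vehicle fleet at the city level; a minimal Python sketch of such a correlation check might look like the following (the arrays are illustrative placeholders, not data from the study).

```python
import numpy as np

# Hypothetical per-city values: summed nighttime brightness (e.g. from
# DMSP-OLS or Suomi-NPP composites) and registered vehicle fleet size.
brightness = np.array([1.2e5, 3.4e5, 8.9e4, 5.6e5, 2.1e5])
vehicle_fleet = np.array([4.1e5, 1.2e6, 3.0e5, 1.9e6, 7.5e5])

# Pearson correlation on log-transformed variables, a simple way to
# quantify how well nocturnal brightness tracks the vehicle fleet.
r = np.corrcoef(np.log10(brightness), np.log10(vehicle_fleet))[0, 1]
print(f"Pearson r (log-log): {r:.3f}")
```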
Abstract:
Work carried out by: Garijo, J. C., Hernández León, S.
Abstract:
Main deformities such as lordosis, opercular deformities, and upper/lower jaw shortening are considered quality descriptors in commercial marine fish fry production and seem to be related, at least in part, to larval culture conditions in early larval stages. The aim of this work was to obtain information about the contribution of the diet and rearing system to the appearance of these abnormalities in gilthead sea bream (Sparus aurata) larvae in semi-industrial scale facilities. For that purpose, two different larval rearing systems, semi-intensive and intensive, were compared in duplicate and with the same live feed enrichments; in addition, two different rotifer enrichments were tested in an intensive system. The biochemical composition of larvae, preys, and commercial products was analysed. At 50 days post-hatching, six hundred fish per treatment were individually examined under a stereoscope and the frequency of abnormalities recorded. At 95 days post-hatching, fry were also monitored by soft X-ray. Survival and malformation frequency were significantly different between treatments; the effects of diet and system are discussed. Significantly lower deformity rates, together with better survival and growth, were obtained in the semi-intensive system, whereas the rotifer enrichment significantly affected larval survival.
Abstract:
Background. The surgical treatment of dysfunctional hips addresses a severe condition for the patient and is a costly therapy for public health. Hip resurfacing techniques seem to hold the promise of various advantages over traditional THR, with particular attention to young and active patients. Although the lesson provided in the past by many branches of engineering is that success in designing competitive products can be achieved only by predicting the possible scenarios of failure, to date implant quality is poorly addressed pre-clinically. Thus revision is the only delayed, yet reliable, end point for assessment. The aim of the present work was to model the musculoskeletal system so as to develop a protocol for predicting failure of hip resurfacing prostheses. Methods. Preliminary studies validated the technique for the generation of subject-specific finite element (FE) models of long bones from Computed Tomography data. The proposed protocol consisted of the numerical analysis of the prosthesis biomechanics through deterministic and statistical studies, so as to assess the risk of biomechanical failure under the different operative conditions the implant might face in a population of interest during various activities of daily living. Physiological conditions were defined including the variability of the anatomy, bone densitometry, surgical uncertainties, and published boundary conditions at the hip. The protocol was tested by analysing a successful design on the market and a new prototype of a resurfacing prosthesis. Results. The intrinsic accuracy of the models in bone stress predictions (RMSE < 10%) was aligned with the current state of the art in this field. The accuracy of prediction of the bone-prosthesis contact mechanics was also excellent (< 0.001 mm). The sensitivity of the models' predictions to uncertainties in the modelling parameters was found to be below 8.4%. The analysis of the successful design resulted in very good agreement with published retrospective studies. The geometry optimisation of the new prototype led to a final design with a low risk of failure. The statistical analysis confirmed the minimal risk of the optimised design over the entire population of interest. The performance of the optimised design showed a significant improvement with respect to the first prototype (+35%). Limitations. In the authors' opinion, the major limitation of this study lies in the boundary conditions. The muscular forces and the hip joint reaction were derived from the few data available in the literature, which can be considered significant but hardly representative of the entire variability of boundary conditions the implant might face over the patient population. This shifted the focus of the research onto modelling the musculoskeletal system; the ongoing activity is to develop subject-specific musculoskeletal models of the lower limb from medical images. Conclusions. The developed protocol was able to accurately predict known clinical outcomes when applied to a well-established device and to support the design optimisation phase, providing important information on critical characteristics of the patients when applied to a new prosthesis. The presented approach has a relevant generality that would allow the extension of the protocol to a large set of orthopaedic scenarios with minor changes. Hence, a failure mode analysis criterion can be considered a suitable tool in developing new orthopaedic devices.
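The statistical stage of the protocol (assessing failure risk over the variability of a patient population) is only summarized in the abstract; the following Python sketch shows, under purely hypothetical distributions and a hypothetical stress limit, what such a Monte Carlo risk assessment could look like. The surrogate stress function stands in for the full finite element analysis and is not the model used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_stress(body_weight, bone_density, implant_angle):
    """Hypothetical surrogate for the FE-predicted peak bone stress (MPa);
    in the real protocol this value would come from a finite element run."""
    return 0.9 * body_weight / bone_density + 2.0 * np.abs(implant_angle - 10.0)

# Assumed patient variability: body weight (kg), bone density (g/cm^3),
# implant orientation (degrees) -- illustrative distributions only.
n = 10_000
weight = rng.normal(80.0, 12.0, n)
density = rng.normal(1.8, 0.15, n)
angle = rng.normal(10.0, 3.0, n)

stress = peak_stress(weight, density, angle)
risk = np.mean(stress > 120.0)   # hypothetical strength limit (MPa)
print(f"Estimated probability of exceeding the limit: {risk:.3%}")
```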
Abstract:
Since the birth of the European Union in 1957, the development of a single market through the integration of national freight transport networks has been one of the most important points on the European Union agenda. Increasingly congested motorways, rising oil prices, and concerns about the environment and climate change require the optimization of transport systems and transport processes. The best solution should be intermodal transport, in which the most efficient transport options are used for the different legs of transport. This thesis examines the problem of defining innovative strategies and procedures for the sustainable development of intermodal freight transport in Europe. In particular, the roles of maritime transport and railway transport in the intermodal chain are examined in depth, as these modes are recognized to be environmentally friendly and energy efficient. Maritime transport is the only mode that has kept pace with the fast growth in road transport, but it is necessary to promote its full exploitation by involving short sea shipping as an integrated service in the intermodal door-to-door supply chain and by improving port accessibility. The role of Motorways of the Sea services as part of the Trans-European Transport Network is taken into account: a picture of European policy and the state of the art of the Italian Motorways of the Sea system are reported. Afterwards, the focus shifts from line to node problems: the role of intermodal railway terminals in the transport chain is discussed. In particular, the last-mile process is taken into account, as it is crucial in order to exploit the full capacity of an intermodal terminal. The difference between the present last-mile planning models of Bologna Interporto and Verona Quadrante Europa is described and discussed. Finally, a new approach to railway intermodal terminal planning and management is introduced, by describing the case of "Terminal Gate" at Verona Quadrante Europa. Some proposals to favour the integrated management of "Terminal Gate" and the allocation of its capacity are drawn up.
Abstract:
Great strides have been made in the last few years in the pharmacological treatment of neuropsychiatric disorders, with the introduction into therapy of several new and more efficient agents, which have improved the quality of life of many patients. Despite these advances, a large percentage of patients are still considered "non-responders" to therapy, drawing no benefit from it. Moreover, these patients have a peculiar therapeutic profile, due to the very frequent application of polypharmacy in the attempt to obtain satisfactory remission of the multiple aspects of psychiatric syndromes. Therapy is heavily individualised, and switching from one therapeutic agent to another is quite frequent. One of the main problems of this situation is the possibility of unwanted or unexpected pharmacological interactions, which can occur both during polypharmacy and during switching. Simultaneous administration of psychiatric drugs can easily lead to interactions if one of the administered compounds influences the metabolism of the others. Impaired CYP450 function due to inhibition of the enzyme is frequent. Other metabolic pathways, such as glucuronidation, can also be influenced. Therapeutic Drug Monitoring (TDM) of psychotropic drugs is an important tool for treatment personalisation and optimisation. It deals with the determination of the plasma levels of parent drugs and metabolites, in order to monitor them over time and to compare these findings with clinical data. This allows chemical-clinical correlations to be established (such as those between the administered dose and therapeutic and side effects), which are essential to obtain the maximum therapeutic efficacy while minimising side and toxic effects. The importance of developing sensitive and selective analytical methods for the determination of the administered drugs and their main metabolites is therefore evident, in order to obtain reliable data that can correctly support clinical decisions. During the three years of the Ph.D. program, analytical methods based on HPLC were developed, validated, and successfully applied to the TDM of psychiatric patients undergoing treatment with drugs belonging to the following classes: antipsychotics, antidepressants, and anxiolytic-hypnotics. The biological matrices processed were blood, plasma, serum, saliva, urine, hair, and rat brain. Among antipsychotics, both atypical and classical agents have been considered, such as haloperidol, chlorpromazine, clotiapine, loxapine, risperidone (and 9-hydroxyrisperidone), clozapine (as well as N-desmethylclozapine and clozapine N-oxide), and quetiapine. While the need for accurate TDM of schizophrenic patients is being increasingly recognized by psychiatrists, only in the last few years has the same attention been paid to the TDM of depressed patients. This is leading to the acknowledgment that depression pharmacotherapy can greatly benefit from the accurate application of TDM. For this reason, the research activity has also focused on first- and second-generation antidepressant agents, such as tricyclic antidepressants, trazodone and m-chlorophenylpiperazine (m-cpp), paroxetine and its three main metabolites, venlafaxine and its active metabolite, and the most recent antidepressant introduced onto the market, duloxetine. Among anxiolytics-hypnotics, benzodiazepines are very often involved in the pharmacotherapy of depression for the relief of anxious components; for this reason, it is useful to monitor these drugs, especially in cases of polypharmacy.
The results obtained during these three years of the Ph.D. program are reliable, and the developed HPLC methods are suitable for the qualitative and quantitative determination of CNS drugs in biological fluids for TDM purposes.
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of the galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral property determinations, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of the work. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. ``Early''- and ``late''-type galaxies are well defined by the spectral, photometric and morphological properties, both when considering them separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of $M_{\mathrm{cross}}$, the transition mass, is in good agreement with other values in the literature.
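The thesis pairs PCA with the Unsupervised Fuzzy Partition (UFP) clustering method; as a rough illustration of the pipeline, the following Python sketch projects a hypothetical galaxy-property matrix onto its principal components and applies a basic fuzzy c-means as a stand-in for UFP (whose exact formulation is not reproduced here). All data, cluster counts, and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns cluster centres and the membership
    matrix U (n_samples x n_clusters). Used here as a stand-in for the
    Unsupervised Fuzzy Partition (UFP) algorithm described in the thesis."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Hypothetical galaxy property matrix: rows are galaxies, columns are up to
# 8 observables (colours, spectral indices, morphological indices, ...).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 8)),    # "late-type"-like cloud
               rng.normal(2.5, 1.0, (200, 8))])   # "early-type"-like cloud

Z = PCA(n_components=2).fit_transform(X)          # project onto principal components
centres, U = fuzzy_cmeans(Z, n_clusters=2)
print("Fraction assigned to cluster 0:", np.mean(U.argmax(axis=1) == 0))
```

Because the memberships in U are fuzzy, borderline objects are not forced into a single class, which mirrors the robustness to outliers that the thesis attributes to the PCA+UFP approach.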