913 results for automated warehouse
Abstract:
The proposal to work on this final project came after several discussions held with Dr. Elzbieta Malinowski Gadja, who in 2008 published the book entitled Advanced Data Warehouse Design: From Conventional to Spatial and Temporal Applications (Data-Centric Systems and Applications). The project was carried out under the technical supervision of Dr. Malinowski, and the direct beneficiary was the University of Costa Rica (UCR), where Dr. Malinowski is a professor at the Department of Computer Science and Informatics. The purpose of this project was twofold: first, to translate chapter III of said book with the intention of generating educational material for the use of the UCR and, second, to venture into the field of technical translation related to data warehouses. For the first component, the goal was to generate a final product that would eventually serve as an educational tool for the post-graduate courses of the UCR. For the second component, this project allowed me to acquire new skills and put into practice techniques that have helped me not only to perform better in my current job as an Assistant Translator at the Inter-American Development Bank (IDB), but also to apply them in similar projects. The process was lengthy and required thorough research and constant communication with the author. The investigation focused on the search for terms and definitions to prepare the glossary, which was the basis for starting the translation project. The translation process itself was carried out in phases, so that comments and corrections by the author could be taken into account in subsequent stages. Later, based on the glossary and the translated text, the illustrations that had been created in the Visio software were translated. In addition to the technical revision by the author, Professor Carme Mangiron was in charge of revising the non-technical text.
The result was a high-quality document that is currently used as reference and study material by the Department of Computer Science and Informatics of the University of Costa Rica.
Abstract:
Land evaluation is the process of estimating the potential use of land based on its attributes. A wide variety of analytical models can be used in this process. In Brazil, the two most widely used land evaluation systems are the Land Use Capability Classification System and the FAO/Brazilian System of Agricultural Aptitude of Lands. Although they differ in several respects, both require cross-referencing numerous environmental variables. ALES (Automated Land Evaluation System) is a computer program for building expert systems for land evaluation. The entities evaluated by ALES are map units, which may be generalized or detailed in character. The area covered by this evaluation comprises the microregions of Chapecó and Xanxerê, in western Santa Catarina, and encompasses 54 municipalities. Data on soils and landscape characteristics were obtained from the state soil reconnaissance survey, at a scale of 1:250,000. This study developed the expert system ATOSC (Avaliação das Terras do Oeste de Santa Catarina, Land Evaluation of Western Santa Catarina); its construction included defining the requirements of the land utilization types, followed by comparing these with the attributes of each map unit. The land utilization types considered were common bean, maize, soybean, and wheat, grown as single crops under the rainfed and management conditions characteristic of these crops in the state. The information on natural resources comprises the climate, soil, and landscape attributes that affect the production of these crops. For each land utilization type, the code, the name, and the respective land use requirements were specified in ATOSC. The requirements of each crop were defined by a specific combination of the selected land characteristics, which determines the severity level of each requirement with respect to the crop.
Four severity levels were established, indicating an increasing degree of limitation or a decreasing potential for a given land use type, namely: null or slight limitation (favorable); moderate limitation (moderately favorable); strong limitation (marginally favorable); and very strong limitation (unfavorable). In the decision tree, the basic component of the expert system, rules are implemented that assign the lands to defined suitability classes, based on the severity of the requirements for each use type. ATOSC facilitated the comparison between the land characteristics of the Chapecó and Xanxerê microregions and the use requirements considered, by performing the land evaluation automatically and thus reducing the time spent on this process. The lands of the Chapecó and Xanxerê microregions were mostly placed in the marginally favorable (3) and unfavorable (4) suitability classes for the crops considered. The main limiting factors identified in these microregions were natural fertility and erosion risk, for common bean and maize, and mechanization conditions and erosion risk, for soybean and wheat.
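The four severity levels and their mapping to suitability classes can be sketched as a small rule set, in which (ALES-style) the worst severity among a crop's requirements determines the class of a map unit. A minimal sketch; the requirement names and values below are hypothetical illustrations, not ATOSC's actual rules:

```python
# Illustrative sketch of an ALES-style decision rule: the worst (highest)
# severity level among a crop's land-use requirements determines the
# suitability class of a map unit. Class labels follow the abstract;
# the example requirement values are hypothetical.

SEVERITY_TO_CLASS = {
    1: "favorable",              # null or slight limitation
    2: "moderately favorable",   # moderate limitation
    3: "marginally favorable",   # strong limitation
    4: "unfavorable",            # very strong limitation
}

def evaluate_map_unit(requirement_severities):
    """Return (class number, label) for a map unit, given the severity
    level (1-4) assessed for each land-use requirement of a crop."""
    worst = max(requirement_severities.values())
    return worst, SEVERITY_TO_CLASS[worst]

# Hypothetical map unit evaluated for maize: natural fertility is the
# strongest limitation, so it drives the suitability class.
severities = {"natural fertility": 3, "erosion risk": 2, "mechanization": 1}
print(evaluate_map_unit(severities))  # → (3, 'marginally favorable')
```

The "worst limitation wins" rule is what makes such evaluations conservative: a single severe requirement downgrades the whole map unit regardless of how favorable the other attributes are.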
Abstract:
Chlamydia serology is indicated to investigate the etiology of miscarriage, infertility, pelvic inflammatory disease, and ectopic pregnancy. Here, we assessed the reliability of a new automated multiplex immunofluorescence assay (InoDiag test) to detect specific anti-C. trachomatis immunoglobulin G. Considering the immunofluorescence assay (IF) as the gold standard, InoDiag tests exhibited similar sensitivities (65.5%) but better specificities (95.1%-98%) than enzyme-linked immunosorbent assays (ELISAs). InoDiag tests demonstrated similar or lower cross-reactivity rates when compared to ELISA or IF.
Abstract:
OBJECTIVE: The estimation of blood pressure is dependent on the accuracy of the measurement devices. We compared blood pressure readings obtained with an automated oscillometric arm-cuff device and with an automated oscillometric wrist-cuff device and then assessed the prevalence of defined blood pressure categories. METHODS: Within a population-based survey in Dar es Salaam (Tanzania), we selected all participants with a blood pressure ≥160/95 mmHg (n=653) and a random sample of participants with blood pressure <160/95 mmHg (n=662), based on the first blood pressure reading. Blood pressure was reassessed 2 years later for 464 and 410 of the participants, respectively. In these 874 subjects, we compared the prevalence of blood pressure categories as estimated with each device. RESULTS: Overall, the wrist device gave higher blood pressure readings than the arm device (difference in systolic/diastolic blood pressure: 6.3 ± 17.3/3.7 ± 11.8 mmHg, P<0.001). However, the arm device tended to give lower readings than the wrist device for high blood pressure values. The prevalence of blood pressure categories differed substantially depending on which device was used: 29% and 14% for blood pressure <120/80 mmHg (arm device versus wrist device, respectively), 30% and 33% for 120-139/80-89 mmHg, 17% and 26% for 140-159/90-99 mmHg, 12% and 13% for 160-179/100-109 mmHg, and 13% and 14% for ≥180/110 mmHg. CONCLUSIONS: A large discrepancy in the estimated prevalence of blood pressure categories was observed using two different automatic measurement devices. This emphasizes that prevalence estimates based on automatic devices should be considered with caution.
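The category prevalences above rest on assigning each reading to one band. A minimal sketch of that threshold cascade, assuming the standard convention (not stated in the abstract) that a reading falls into the higher category whenever either the systolic or the diastolic value reaches it:

```python
# Sketch of the blood pressure categorization used in the abstract.
# Assumption: a reading is assigned to a category when EITHER the
# systolic or the diastolic value (mmHg) reaches that category's
# threshold, so the higher category wins.

def bp_category(systolic, diastolic):
    """Return the blood pressure category label for one reading."""
    if systolic >= 180 or diastolic >= 110:
        return ">=180/110"
    if systolic >= 160 or diastolic >= 100:
        return "160-179/100-109"
    if systolic >= 140 or diastolic >= 90:
        return "140-159/90-99"
    if systolic >= 120 or diastolic >= 80:
        return "120-139/80-89"
    return "<120/80"

print(bp_category(128, 76))  # → 120-139/80-89
print(bp_category(165, 88))  # → 160-179/100-109
```

Because the cascade tests the highest band first, a device that systematically reads a few mmHg higher (as the wrist device did here) pushes borderline readings across a threshold, which is exactly how modest measurement differences become large prevalence differences.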
Abstract:
The aim of this study was to evaluate the forensic protocol recently developed by Qiagen for the QIAsymphony automated DNA extraction platform. Samples containing low amounts of DNA were specifically considered, since they represent the majority of samples processed in our laboratory. The analysis of simulated blood and saliva traces showed that the highest DNA yields were obtained with the maximal elution volume available for the forensic protocol, that is, 200 µl. The resulting DNA extracts were too diluted for successful DNA profiling and required concentration. This additional step is time consuming and potentially increases inversion and contamination risks. The 200 µl DNA extracts were concentrated to 25 µl, and the DNA recovery was estimated with real-time PCR as well as with the percentage of SGM Plus alleles detected. Results using our manual protocol, based on the QIAamp DNA mini kit, and the automated protocol were comparable. Further tests will be conducted to determine more precisely the DNA recovery, contamination risk, and removal of PCR inhibitors, once a definitive procedure allowing the concentration of DNA extracts from low-yield samples becomes available for the QIAsymphony.
Abstract:
TCRep 3D is an automated systematic approach for TCR-peptide-MHC class I structure prediction, based on homology and ab initio modeling. It has been considerably generalized from former studies to be applicable to large repertoires of TCR. First, the locations of the complementarity determining regions (CDR) of the target sequences are automatically identified by a sequence alignment strategy against a database of TCR Vα and Vβ chains. A structure-based alignment ensures automated identification of CDR3 loops. The CDR are then modeled in the environment of the complex, in an ab initio approach based on a simulated annealing protocol. During this step, dihedral restraints are applied to drive the CDR1 and CDR2 loops towards their canonical conformations, as described by Al-Lazikani et al. We developed a new automated algorithm that determines additional restraints to iteratively converge towards TCR conformations making frequent hydrogen bonds with the pMHC. We demonstrated that our approach outperforms popular scoring methods (Anolea, Dope and Modeller) in predicting relevant CDR conformations. Finally, this modeling approach was successfully applied to experimentally determined sequences of TCR that recognize the NY-ESO-1 cancer testis antigen. This analysis revealed a mechanism of selection of TCR through the presence of a single conserved amino acid in all CDR3β sequences. The important structural modifications predicted in silico, and the associated dramatic loss of experimental binding affinity upon mutation of this amino acid, show the good correspondence between the predicted structures and their biological activities. To our knowledge, this is the first systematic approach developed for large-scale TCR repertoire structural modeling.
Abstract:
We present a study on the development and the evaluation of a fully automated radio-frequency glow discharge system devoted to the deposition of amorphous thin film semiconductors and insulators. The following aspects were carefully addressed in the design of the reactor: (1) cross contamination by dopants and unstable gases, (2) capability of fully automated operation, (3) precise control of the discharge parameters, particularly the substrate temperature, and (4) high chemical purity. The new reactor, named ARCAM, is a multiplasma-monochamber system consisting of three separated plasma chambers located inside the same isothermal vacuum vessel. Thus, the system benefits from the advantages of multichamber systems but keeps the simplicity and low cost of monochamber systems. The evaluation of the reactor's performance showed that the oven-like structure combined with differential dynamic pumping provides a high chemical purity in the deposition chamber. Moreover, the studies of the effects associated with the plasma recycling of material from the walls and with the thermal decomposition of diborane showed that the multiplasma-monochamber design is efficient for the production of abrupt interfaces in hydrogenated amorphous silicon (a-Si:H) based devices. Also, special attention was paid to the optimization of plasma conditions for the deposition of low density of states a-Si:H. Hence, we also present the results concerning the effects of the geometry, the substrate temperature, the radio frequency power and the silane pressure on the properties of the a-Si:H films. In particular, we found that low density of states a-Si:H can be deposited at a wide range of substrate temperatures (100°C
Abstract:
A large percentage of bridges in the state of Iowa are classified as structurally or functionally deficient. These bridges annually compete for a share of Iowa's limited transportation budget. To avoid an increase in the number of deficient bridges, the state of Iowa decided to implement a comprehensive Bridge Management System (BMS) and selected the Pontis BMS software as a bridge management tool. This program will be used to provide a selection of maintenance, repair, and replacement strategies for the bridge networks to achieve an efficient and possibly optimal allocation of resources. The Pontis BMS software uses a new rating system to evaluate extensive and detailed inspection data gathered for all bridge elements. Manually collecting these data would be a highly time-consuming job. The objective of this work was to develop an automated, computerized methodology for an integrated database that includes the rating conditions as defined in the Pontis program. Several of the available techniques that can be used to capture inspection data were reviewed, and the most suitable method was selected. To accomplish the objectives of this work, two user-friendly programs were developed. One program is used in the field to collect inspection data following a step-by-step procedure without the need to refer to the Pontis user's manuals. The other program is used in the office to read the inspection data and prepare input files for the Pontis BMS software. These two programs require users to have very limited knowledge of computers. On-line help screens as well as options for preparing, viewing, and printing inspection reports are also available. The developed data collection software will improve and expedite the process of conducting bridge inspections and preparing the required input files for the Pontis program. In addition, it will eliminate the need for large storage areas and will simplify retrieval of inspection data.
Furthermore, the approach developed herein will facilitate transferring these captured data electronically between offices within the Iowa DOT and across the state.
Abstract:
OBJECTIVE: To evaluate an automated seizure detection (ASD) algorithm in EEGs with periodic and other challenging patterns. METHODS: Selected EEGs recorded in patients over 1 year old were classified into four groups: A. Periodic lateralized epileptiform discharges (PLEDs) with intermixed electrical seizures. B. PLEDs without seizures. C. Electrical seizures and no PLEDs. D. No PLEDs or seizures. Recordings were analyzed by the Persyst P12 software and compared to the raw EEG as interpreted by two experienced neurophysiologists. Positive percent agreement (PPA) and false-positive rates per hour (FPR) were calculated. RESULTS: We assessed 98 recordings (Group A=21 patients; B=29; C=17; D=31). Total duration was 82.7 h (median: 1 h), containing 268 seizures. The software detected 204 (76.1%) of the seizures; all ictal events were captured in 29/38 (76.3%) patients, and in only 3 (7.7%) no seizures were detected. Median PPA was 100% (range 0-100; interquartile range 50-100), and median FPR was 0/h (range 0-75.8; interquartile range 0-4.5); however, lower performance was seen in the groups containing periodic discharges. CONCLUSION: This analysis provides data regarding the yield of the ASD in a particularly difficult subset of EEG recordings, showing that periodic discharges may bias the results. SIGNIFICANCE: Ongoing refinements in this technique might enhance its utility and lead to more extensive application.
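The two metrics reported above have simple definitions, sketched here against the expert reading taken as reference (the function names are ours, not Persyst's):

```python
# Sketch of the two agreement metrics reported in the abstract, with the
# neurophysiologists' interpretation taken as the reference standard.

def positive_percent_agreement(true_positives, false_negatives):
    """PPA: share (%) of reference-confirmed seizures the detector found."""
    reference_total = true_positives + false_negatives
    return 100.0 * true_positives / reference_total if reference_total else 0.0

def false_positive_rate_per_hour(false_positives, hours):
    """FPR: detections with no corresponding reference seizure, per hour."""
    return false_positives / hours

# Overall figures from the abstract: 204 of 268 seizures detected.
print(round(positive_percent_agreement(204, 268 - 204), 1))  # → 76.1
```

Note that PPA says nothing about false alarms and FPR says nothing about missed seizures, which is why the abstract reports both, per recording, rather than a single accuracy figure.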
Abstract:
Amplified fragment length polymorphism (AFLP) is a cheap and efficient protocol for generating large sets of genetic markers. This technique has become increasingly used during the last decade in various fields of biology, including population genomics, phylogeography, and genome mapping. Here, we present RawGeno, an R library dedicated to the automated scoring of AFLPs (i.e., the coding of electropherogram signals into ready-to-use datasets). Our program includes a complete suite of tools for binning, editing, visualizing, and exporting results obtained from AFLP experiments. RawGeno can be used either through command lines and programmed analysis routines or through a user-friendly graphical user interface. We describe the whole RawGeno pipeline along with recommendations for (a) setting up the analysis of electropherograms in combination with PeakScanner, a program freely distributed by Applied Biosystems; (b) performing quality checks; (c) defining bins and proceeding to scoring; (d) filtering nonoptimal bins; and (e) exporting results in different formats.
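The core of step (c), binning, is grouping peaks of similar fragment size across samples so that presence/absence can be scored per bin. A minimal sketch of the idea; RawGeno's actual algorithm is more elaborate, and the size tolerance here is an illustrative assumption:

```python
# Minimal sketch of the "binning" step in AFLP scoring: fragment sizes
# (in base pairs) from many electropherograms are grouped into bins of
# similar size; each sample is then coded 1/0 for peak presence per bin.
# This greedy single-pass grouping and the 1.0 bp tolerance are
# illustrative assumptions, not RawGeno's actual algorithm.

def bin_peaks(peak_sizes, tolerance=1.0):
    """Group sorted fragment sizes into bins whose members lie within
    `tolerance` base pairs of the running bin mean."""
    bins = []
    for size in sorted(peak_sizes):
        if bins and abs(size - sum(bins[-1]) / len(bins[-1])) <= tolerance:
            bins[-1].append(size)   # close enough: join the current bin
        else:
            bins.append([size])     # too far: open a new bin
    return bins

peaks = [100.1, 100.4, 103.2, 103.9, 110.0]
print(bin_peaks(peaks))  # → [[100.1, 100.4], [103.2, 103.9], [110.0]]
```

Steps (d) and (e) then operate on such bins: bins that are too wide or too sparsely populated are filtered out before the 1/0 matrix is exported.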
Abstract:
This project developed an automatic conversion software tool that takes as input an Iowa Department of Transportation (DOT) MicroStation three-dimensional (3D) design file and converts it into a form that can be used by the University of Iowa's National Advanced Driving Simulator (NADS) MiniSim. Once imported into the simulator, the new roadway has the identical geometric design features as in the Iowa DOT design file. The base roadway appears as a wireframe in the simulator software. Through additional software tools, textures and shading can be applied to the roadway surface and surrounding terrain to produce the visual appearance of an actual road. This tool enables Iowa DOT engineers to work with the universities to create drivable versions of prospective roadway designs. By driving the designs in the simulator, problems can be identified early in the design process. The simulated drives can also be used for public outreach and human factors driving research.
Abstract:
The creation of three-dimensional (3D) drawings of proposed designs for construction, reconstruction, and rehabilitation activities is becoming increasingly common among highway designers, whether department of transportation (DOT) employees or consulting engineers. However, technical challenges prevent these 3D drawings/models from being used as the basis of interactive simulation. Use of driving simulation to serve the needs of the transportation industry in the US lags behind Europe due to several factors, including the lack of technical infrastructure at DOTs; the cost of maintaining and supporting simulation infrastructure, traditionally done by simulation domain experts; and the cost and effort of translating DOT domain data into the simulation domain.