972 results for Automated sorting system
Abstract:
OCEANS, 2001. MTS/IEEE Conference and Exhibition (Volume: 2)
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The processes of mobilizing land for public- and private-domain infrastructure are governed by specific legal frameworks and are systematically confronted with the poor national situation regarding cadastral identification and regularization, which leads to major inefficiencies, sometimes with a very negative impact on overall effectiveness. This project report describes the Ferbritas Cadastre Information System (FBSIC) project and tools, which, in conjunction with other applications, allow management of the entire life-cycle of land acquisition and cadastre, including support for field activities with the integration of information collected in the field, the development of multi-criteria analysis information, the monitoring of all information during the operation stage, and the automated generation of outputs. The benefits are evident at the level of operational efficiency, including tools that enable process integration and standardization of procedures, facilitate analysis and quality control, and maximize performance in the acquisition, maintenance and management of cadastre and expropriation information (expropriation projects). The implemented system thus achieves levels of robustness, comprehensiveness, openness, scalability and reliability suitable for a structural platform. The resulting solution, FBSIC, is a fit-for-purpose cadastre information system rooted in the field of railway infrastructure. The integrating nature of FBSIC makes it possible to meet present needs and scale to future services; to collect, maintain, manage and share all information on one common platform and transform it into knowledge; to interoperate with other platforms; and to increase the accuracy and productivity of business processes related to land property management.
Abstract:
Introduction. The genera Enterococcus, Staphylococcus and Streptococcus are recognized as important Gram-positive human pathogens. The aim of this study was to evaluate the performance of Vitek 2 in identifying Gram-positive cocci and their antimicrobial susceptibilities. Methods. One hundred four isolates were analyzed to determine the accuracy of the automated system for identifying the bacteria and their susceptibility to oxacillin and vancomycin. Results. The system correctly identified 77.9% and 97.1% of the isolates at the species and genus levels, respectively. Additionally, 81.8% of the Vitek 2 results agreed with the known antimicrobial susceptibility profiles. Conclusion. Vitek 2 correctly identified the commonly isolated strains; however, the limitations of the method may lead to ambiguous findings.
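As a quick arithmetic check, the species- and genus-level percentages quoted above correspond to 81 and 101 of the 104 isolates. A minimal sketch of that back-calculation (our reconstruction, not the paper's raw data table):

```python
# Back-calculation of ours (not the paper's raw table): the quoted
# percentages over 104 isolates imply the following counts.
n_isolates = 104
species_correct = 81   # 81 / 104  = 77.9%
genus_correct = 101    # 101 / 104 = 97.1%

print(f"species-level accuracy: {species_correct / n_isolates:.1%}")
print(f"genus-level accuracy:   {genus_correct / n_isolates:.1%}")
```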
Abstract:
Earthworks involve the levelling or shaping of a target area through the moving or processing of the ground surface. Most construction projects require earthworks, which are heavily dependent on mechanical equipment (e.g., excavators, trucks and compactors). Earthworks are often the most costly and time-consuming component of infrastructure construction (e.g., roads, railways and airports), and current pressure for higher productivity and safety highlights the need to optimize them, which is a nontrivial task. Most previous attempts at tackling this problem focus on single-objective optimization of partial processes or aspects of earthworks, overlooking the advantages of multi-objective, global optimization. This work describes a novel optimization system based on an evolutionary multi-objective approach, capable of globally optimizing several objectives simultaneously and dynamically. The proposed system views an earthwork construction as a production line, where the goal is to optimize resources under two crucial criteria (cost and duration); it focuses the evolutionary search (non-dominated sorting genetic algorithm II, NSGA-II) on compaction allocation and uses linear programming to distribute the remaining equipment (e.g., excavators). Several experiments were conducted using real-world data from a Portuguese construction site, showing that the proposed system is quite competitive when compared with current manual earthwork equipment allocation.
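To make the multi-objective selection step concrete, the sketch below shows Pareto (non-dominated) filtering of candidate equipment allocations under the two criteria named above, cost and duration; NSGA-II builds on exactly this dominance relation. This is an illustrative fragment of ours, not the authors' implementation, and all numbers are hypothetical.

```python
# Illustrative fragment of ours, not the authors' system: Pareto
# (non-dominated) filtering of candidate equipment allocations under the
# two criteria the abstract names, cost and duration.

def dominates(a, b):
    """True if allocation a is no worse than b in both objectives
    (cost, duration) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset (the first front in NSGA-II)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (cost in kEUR, duration in days) for four allocations.
candidates = [(120.0, 30.0), (100.0, 45.0), (130.0, 28.0), (125.0, 31.0)]
print(pareto_front(candidates))  # (125.0, 31.0) is dominated by (120.0, 30.0)
```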
Abstract:
Problem identification and characterization. One of the most important problems associated with building software is its correctness. Seeking to guarantee the correct behaviour of software, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Owing to their nature, applying formal methods requires considerable experience and expertise, above all in mathematics and logic, which makes their application costly in practice. As a consequence, their use has remained largely confined to critical systems, that is, systems whose malfunction can cause serious damage, even though the benefits these techniques provide are relevant to every kind of software. Carrying the benefits of formal methods over to software development contexts broader than critical systems would have a strong impact on productivity in those contexts. Hypothesis. The availability of automated analysis tools is of great importance; several powerful formal-methods-based analysis tools that target source code directly are examples of this. For the vast majority of these tools, however, the gap between the notions developers are used to and those needed to apply formal analysis remains too wide. Many tools use assertion languages that lie outside developers' usual knowledge and habits, and in many cases interpreting the output of the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the artefacts under analysis grow (scalability). This limitation is widely known and is considered critical to the practical applicability of formal analysis methods. One way to attack it is to exploit information and characteristics of specific application domains. Objectives. This project aims to build formal analysis tools that contribute to quality, in terms of functional correctness, of specifications, models or code in the context of software development. More precisely, it seeks to identify specific settings in which certain automated analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can reach levels of scalability beyond those known for these techniques in general settings. We will attempt to implement the adapted techniques in tools usable by developers who are familiar with the application context but not necessarily versed in the underlying methods or techniques. Materials and methods. The materials will be literature relevant to the area and computing equipment; the methods will be those of discrete mathematics, logic and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods.
The project is also expected to yield analysis tools whose usability makes them suitable for developers without specific training in the formal methods employed. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability. A crucial factor for software quality is correctness. Traditionally, formal approaches to software development concentrate on functional correctness and tackle this problem by relying on well-defined notations founded on solid mathematical grounds. Their precise semantics make formal methods well suited for analysis, but they are usually more complex and require familiarity and experience with the manipulation of mathematical definitions. As a result, their acceptance by software engineers is rather restricted, and formal methods applications have been confined to critical systems, although the advantages they provide apply to any kind of software system. It is widely accepted that appropriate tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they are still far from being simple enough to be employed by software engineers without experience in formal methods. Another important obstacle to the adoption of tool support for formal methods is scalability: automated software analysis is intrinsically complex, and its techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification or coding activities in software development processes in which to apply automated formal analysis techniques. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses, compared with the general case.
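For readers unfamiliar with the kind of analysis referred to above, the sketch below poses a generic SMT query through the Z3 solver's Python API (the z3-solver package); it is an example of ours, not a tool produced by the project, and the verification question it encodes is hypothetical.

```python
# Generic example of ours, not a tool produced by the project: an SMT
# query posed through the Z3 solver's Python API (pip install z3-solver).
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()

# Hypothetical verification question: can an assertion be violated, i.e.
# is there an input with x > 0, y > 0 and yet x + y < 0?
s.add(x > 0, y > 0, x + y < 0)

if s.check() == sat:
    print("assertion can be violated:", s.model())
else:
    print("assertion holds for all inputs")  # expected: unsat, so this prints
```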
Abstract:
Seismic analysis, horizon matching, fault tracking, marked point process, stochastic annealing
Abstract:
In hyperdiploid acute lymphoblastic leukaemia (ALL), the simultaneous occurrence of specific aneuploidies confers a more favourable outcome than hyperdiploidy alone. Interphase (I) FISH complements conventional cytogenetics (CC) through its sensitivity and ability to detect chromosome aberrations in non-dividing cells. To overcome the limits of manual I-FISH, we developed an automated four-colour I-FISH approach and assessed its ability to detect concurrent aneuploidies in ALL. I-FISH was performed using centromeric probes for chromosomes 4, 6, 10 and 17. Parameters established for automatic nucleus selection and signal detection were evaluated (3 controls). Cut-off values were determined (10 controls, 1000 nuclei/case). Combinations of aneuploidies were considered relevant when each aneuploidy was individually significant. Results obtained in 10 ALL patients (1500 nuclei/patient) were compared with those by CC. Various combinations of aneuploidies were identified. All clones detected by CC were observed by I-FISH. I-FISH revealed numerous additional abnormal clones, ranging between 0.1% and 31.6%, based on the large number of nuclei evaluated. Four-colour automated I-FISH permits the identification of concurrent aneuploidies of prognostic significance in hyperdiploid ALL. Large numbers of cells can be analysed rapidly by this method. Owing to its high sensitivity, the method provides a powerful tool for the detection of small abnormal clones at diagnosis and during follow-up. Compared to CC, it generates a more detailed cytogenetic picture, the biological and clinical significance of which merits further evaluation. Once optimised for a given set of probes, the system can be easily adapted for other probe combinations.
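The abstract does not state the exact cut-off rule; a common choice in I-FISH studies is the mean control false-positive rate plus three standard deviations. The sketch below illustrates that assumed rule with hypothetical control rates.

```python
# Assumed rule, not stated in the abstract: cut-off = mean control
# false-positive rate + 3 standard deviations, a common choice in I-FISH
# studies. One hypothetical rate (%) per control (10 controls, 1000 nuclei each).
from statistics import mean, stdev

control_fp_rates = [0.8, 1.2, 0.9, 1.1, 0.7, 1.0, 1.3, 0.6, 0.9, 1.0]

cutoff = mean(control_fp_rates) + 3 * stdev(control_fp_rates)
print(f"cut-off: {cutoff:.2f}% abnormal nuclei")
# A patient clone would then be called significant only above this value.
```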
Abstract:
To evaluate the Organon Teknika MB/BacT system for indirect susceptibility testing against the alternative drugs ofloxacin (OFLO), amikacin (AMI) and rifabutin (RIF), and against the usual drugs of standard treatment regimens such as rifampin (RMP), isoniazid (INH), pyrazinamide (PZA), streptomycin (SM), ethambutol (EMB) and ethionamide (ETH), cultures of clinical specimens from 117 patients with pulmonary tuberculosis under investigation for multidrug resistance, admitted sequentially for examination from 2001 to 2002, were studied. Fifty of the Mycobacterium tuberculosis cultures were inoculated into the gold-standard BACTEC 460 TB (Becton Dickinson) for studying resistance to AMI, RIF and OFLO, and the remaining 67 were inoculated into Lowenstein-Jensen (LJ) medium (the gold standard currently used in Brazil) for studying resistance to RMP, INH, PZA, SM, EMB and ETH. Compared with the BACTEC system, we observed 100% sensitivity for AMI (80.8-100), RIF (80.8-100) and OFLO (78.1-100), and 100% specificity for AMI (85.4-100), RIF (85.4-100) and OFLO (86.7-100). Compared with the results obtained on LJ, we observed 100% sensitivity for RMP (80-100), followed by INH, 95% (81.8-99.1), and EMB, 94.7% (71.9-99.7), and 100% specificity for all drugs tested except PZA, 98.3% (89.5-99.9); all ranges are 95% confidence intervals. The results showed a high level of accuracy and demonstrated that the fully automated, non-radiometric MB/BacT system is suitable for routine susceptibility testing in public health laboratories.
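For reference, the accuracy measures reported above reduce to the usual 2x2-table definitions; the sketch below computes them from hypothetical counts (the paper's per-drug tables are not reproduced here).

```python
# The usual 2x2-table definitions behind the figures above, computed from
# hypothetical counts (the paper's per-drug tables are not reproduced here).
def sensitivity(tp, fn):
    """Fraction of gold-standard-resistant isolates also resistant by MB/BacT."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of gold-standard-susceptible isolates also susceptible by MB/BacT."""
    return tn / (tn + fp)

# Hypothetical counts for one drug: 18 resistant and 32 susceptible isolates,
# all classified concordantly.
print(f"sensitivity: {sensitivity(tp=18, fn=0):.0%}")  # 100%
print(f"specificity: {specificity(tn=32, fp=0):.0%}")  # 100%
```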
Abstract:
In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study presents investigations into the variability of scores from an AFIS when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios (LRs), that allows the evaluation of mark-to-print comparisons. Through its use of AFIS technology, this model benefits from the availability of large amounts of data as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term between-finger variability. The issues addressed in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, and the influence of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies for obtaining between-finger variability when these elements cannot be conclusively determined from the mark (or, for finger number, from its position with respect to other marks) are presented. These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
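A minimal sketch of how the denominator described above might be assigned in practice: estimate a density from a large sample of non-matching comparison scores and evaluate it at the observed score. The synthetic gamma-distributed scores and the Gaussian kernel density estimate are assumptions of ours, not the paper's model.

```python
# Sketch of the denominator assignment described above, under assumptions
# of ours: synthetic gamma-distributed scores stand in for the 10,000
# non-matching AFIS scores, and a Gaussian KDE stands in for whatever
# density model the authors used. Requires numpy and scipy.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Stand-in for scores from comparing the questioned mark against a
# database of non-matching fingers (appropriate finger number / general
# pattern combination).
between_finger_scores = rng.gamma(shape=2.0, scale=50.0, size=10_000)

# Density of scores under the different-source hypothesis.
between_finger_density = gaussian_kde(between_finger_scores)

observed_score = 400.0  # hypothetical mark-to-print comparison score
print(f"LR denominator at score {observed_score}: "
      f"{between_finger_density(observed_score)[0]:.3e}")
```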
Abstract:
HAMAP (High-quality Automated and Manual Annotation of Proteins-available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
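To illustrate the kind of conditional logic described above (our simplification, not HAMAP's actual rule syntax), the sketch below applies an annotation only when a sequence belongs to a family profile and satisfies a sequence-level condition; the rule identifier and annotation text are hypothetical.

```python
# Our simplification, not HAMAP's actual rule syntax: an annotation rule
# fires only when a sequence belongs to a family profile and satisfies a
# sequence-level condition. Rule ID and annotation text are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    family: str                       # family classification profile ID
    condition: Callable[[str], bool]  # predicate over the raw sequence
    annotation: str                   # annotation applied when the rule fires

def annotate(sequence: str, family: str, rules: List[Rule]) -> List[str]:
    """Collect annotations from all rules whose family and condition match."""
    return [r.annotation for r in rules
            if r.family == family and r.condition(sequence)]

# Hypothetical rule: annotate the catalytic variant only if the active-site
# residue (position 43, 1-based) is a cysteine.
rules = [Rule("MF_00001",
              lambda seq: len(seq) > 42 and seq[42] == "C",
              "Active site: Cys-43")]
print(annotate("M" + "A" * 41 + "C" + "A" * 20, "MF_00001", rules))
```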
Abstract:
We designed a trap system to isolate different amino acid sequences that can target proteins to the cell surface via GPI anchor transfer. This selection procedure is based on the insertion of various sequences that regenerate a functional GPI anchor signal sequence and therefore provoke re-expression of a reporter molecule at the cell surface. Using this trap for cell-surface targeting sequences, we were able to demonstrate the importance of the elements defined as essential for GPI anchor addition. Such a system could be used for an exhaustive analysis of the carboxyl-terminal structural requirements for GPI membrane anchoring.
Abstract:
This study demonstrates that endogenously produced interferon gamma (IFN-gamma) forms the basis of a tumor surveillance system that controls development of both chemically induced and spontaneously arising tumors in mice. Compared with wild-type mice, mice lacking sensitivity to either IFN-gamma (i.e., IFN-gamma receptor-deficient mice) or all IFN family members (i.e., Stat1-deficient mice) developed tumors more rapidly and with greater frequency when challenged with different doses of the chemical carcinogen methylcholanthrene. In addition, IFN-gamma-insensitive mice developed tumors more rapidly than wild-type mice when bred onto a background deficient in the p53 tumor-suppressor gene. IFN-gamma-insensitive p53(-/-) mice also developed a broader spectrum of tumors compared with mice lacking p53 alone. Using tumor cells derived from methylcholanthrene-treated IFN-gamma-insensitive mice, we found IFN-gamma's actions to be mediated at least partly through its direct effects on the tumor cell leading to enhanced tumor cell immunogenicity. The importance and generality of this system is evidenced by the finding that certain types of human tumors become selectively unresponsive to IFN-gamma. Thus, IFN-gamma forms the basis of an extrinsic tumor-suppressor mechanism in immunocompetent hosts.
Abstract:
A recombinant baculovirus encoding a single-chain murine major histocompatibility complex class I molecule in which the first three domains of H-2Kd are fused to beta 2-microglobulin (beta 2-m) via a 15-amino acid linker has been isolated and used to infect lepidopteran cells. A soluble, 391-amino acid single-chain H-2Kd (SC-Kd) molecule of 48 kDa was synthesized and glycosylated in insect cells and could be purified in the absence of detergents by affinity chromatography using the anti-H-2Kd monoclonal antibody SF1.1.1.1. We tested the ability of SC-Kd to bind antigenic peptides using a direct binding assay based on photoaffinity labeling. The photoreactive derivative was prepared from the H-2Kd-restricted Plasmodium berghei circumsporozoite protein (P.b. CS) peptide 253-260 (YIPSAEKI), a probe that we had previously shown to be unable to bind to the H-2Kd heavy chain in infected cells in the absence of co-expressed beta 2-microglobulin. SC-Kd expressed in insect cells did not require additional mouse beta 2-m to bind the photoprobe, indicating that the covalently attached beta 2-m could substitute for the free molecule. Similarly, binding of the P.b. CS photoaffinity probe to the purified SC-Kd molecule was unaffected by the addition of exogenous beta 2-m. This is in contrast to H-2KdQ10, a soluble H-2Kd molecule in which beta 2-m is noncovalently bound to the soluble heavy chain, whose ability to bind the photoaffinity probe is greatly enhanced in the presence of an excess of exogenous beta 2-m. The binding of the probe to SC-Kd was allele-specific, since labeling was selectively inhibited only by antigenic peptides known to be presented by the H-2Kd molecule.