524 results for implements
Abstract:
Background: Decreasing costs of DNA sequencing have made prokaryotic draft genome sequences increasingly common. A contig scaffold is an ordering of contigs, each in its correct orientation. A scaffold can help genome comparisons and guide gap closure efforts. One popular technique for obtaining contig scaffolds is to map contigs onto a reference genome. However, rearrangements that may exist between the query and reference genomes may result in incorrect scaffolds if these rearrangements are not taken into account. Large-scale inversions are common rearrangement events in prokaryotic genomes. Even in draft genomes it is possible to detect the presence of inversions given sufficient sequencing coverage and a sufficiently close reference genome. Results: We present a linear-time algorithm that can generate a set of contig scaffolds for a draft genome sequence represented in contigs, given a reference genome. The algorithm is aimed at prokaryotic genomes and relies on the presence of matching sequence patterns between the query and reference genomes that can be interpreted as the result of large-scale inversions; we call these patterns inversion signatures. Our algorithm is capable of correctly generating a scaffold if at least one member of every inversion signature pair is present in contigs and no inversion signatures have been overwritten in evolution. The algorithm is also capable of generating scaffolds in the presence of any kind of inversion, even though in this general case there is no guarantee that all scaffolds in the scaffold set will be correct. We compare the performance of SIS, the program that implements the algorithm, to seven other scaffold-generating programs. The results of our tests show that SIS has overall better performance. Conclusions: SIS is a new, easy-to-use tool to generate contig scaffolds, available both stand-alone and as a web server. The good performance of SIS in our tests adds evidence that large-scale inversions are widespread in prokaryotic genomes.
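A minimal sketch of the reference-mapping idea that scaffolding tools of this kind start from (this is a naive baseline, not the SIS algorithm, which additionally reasons about inversion signatures; the Hit structure and the sample coordinates below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Hit:
    contig: str      # contig identifier
    ref_start: int   # leftmost alignment coordinate on the reference
    strand: str      # '+' or '-' orientation of the alignment

def naive_reference_scaffold(hits):
    """Order contigs by reference coordinate and orient them by strand.

    This is the simplest reference-guided scaffold: it is correct only when
    no rearrangement (e.g. a large-scale inversion) separates the draft
    genome from the reference, which is precisely the case SIS addresses.
    """
    ordered = sorted(hits, key=lambda h: h.ref_start)
    return [(h.contig, h.strand) for h in ordered]

# Hypothetical example: three contigs aligned against a reference genome.
hits = [Hit("contig_2", 150_000, "-"),
        Hit("contig_1", 10_000, "+"),
        Hit("contig_3", 420_000, "+")]
print(naive_reference_scaffold(hits))
# [('contig_1', '+'), ('contig_2', '-'), ('contig_3', '+')]
```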
Abstract:
We present a method for generating exact and explicit forms of one-sided, heavy-tailed Lévy stable probability distributions g_α(x), 0 ≤ x < ∞, 0 < α < 1. We demonstrate that knowledge of one such distribution g_α(x) suffices to obtain exactly g_{α^p}(x), p = 2, 3, .... Similarly, from known g_α(x) and g_β(x), 0 < α, β < 1, we obtain g_{αβ}(x). The method is based on the construction of an integral operator, called the Lévy transform, which implements the above operations. For rational α = l/k with l < k, we reproduce in this manner many of the recently obtained exact results for g_{l/k}(x). This approach can also be recast as an application of the Efros theorem for generalized Laplace convolutions. It relies solely on efficient definite integration. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4709443]
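As a hedged illustration of the composition property described above, the standard subordination identity for one-sided stable densities (defined here through their Laplace transform) produces g_{αβ} from g_α and g_β; the Lévy transform of the paper implements operations of this kind, although its exact construction may differ:

```latex
% Subordination identity for one-sided stable densities
% (standard result; consistent with, but not necessarily identical to,
%  the Lévy transform constructed in the paper).
\[
  \int_0^\infty e^{-sx}\, g_\alpha(x)\,dx = e^{-s^\alpha},
  \qquad 0<\alpha<1 ,
\]
\[
  g_{\alpha\beta}(x)
  = \int_0^\infty g_\alpha\!\left(\frac{x}{y^{1/\alpha}}\right)
    y^{-1/\alpha}\, g_\beta(y)\, dy ,
  \qquad 0<\alpha,\beta<1 .
\]
% Iterating with \beta=\alpha gives g_{\alpha^2}, g_{\alpha^3}, \dots,
% i.e. a single known g_\alpha generates the whole family g_{\alpha^p}.
```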
Abstract:
Companies are currently choosing to integrate logics and systems to achieve better solutions. These combinations also include companies striving to join the logic of the material requirements planning (MRP) system with lean production systems. The purpose of this article was to design an MRP module as part of the implementation of an enterprise resource planning (ERP) system in a company that produces agricultural implements and has used the lean production system since 1998. The proposal draws on innovation theory, network theory, lean production systems, ERP systems and hybrid production systems, which combine MRP components with lean production concepts. The analytical approach of innovation networks enables verification of the links and relationships among the companies and departments of the same corporation. The analysis begins with the MRP implementation project carried out in a Brazilian metallurgical company and follows the operationalisation of the MRP project through to the stabilisation of production. The main point is that the MRP system should support the company's operations by giving it the agility to respond in time to demand fluctuations, facilitating the creation process and the control of branch offices in other countries that use components produced at the headquarters plant, hence ensuring more accurate stock estimates. Consequently, the article presents the Enterprise Knowledge Development organisational modelling methodology in order to represent further models (goals, actors and resources, business rules, business processes and concepts) that should be included in this MRP implementation process for the new configuration of the production system.
Abstract:
The ventrolateral caudoputamen (VLCP) is well known to participate in the control of orofacial movements and forepaw usage accompanying feeding behavior. Previous studies from our laboratory have shown that insect hunting is associated with a distinct Fos up-regulation in the VLCP at intermediate rostro-caudal levels. Moreover, using reversible blockade with lidocaine, we have previously suggested that the VLCP implements the stereotyped actions seen during prey capture and handling, and may also influence the motivational drive to start attacking the roaches. However, considering that (1) lidocaine suppresses action potentials not only in neurons but also in fibers of passage, rendering the observed behavioral effect not specific to the ventrolateral caudoputamen; (2) the short lidocaine-induced inactivation period left a relatively narrow window in which to observe behavioral changes; and (3) the restraint stress of drug injection could also have disturbed hunting behavior, in the present study we examined the role of the VLCP in predatory hunting by placing bilateral NMDA lesions three weeks prior to behavioral testing. We were able to confirm that the VLCP serves to implement the stereotyped sequence of actions seen during prey capture and handling, but the study did not confirm its role in influencing the motivational drive to hunt. Together with other studies from our group, the present work serves as an important piece of information that helps to reveal the neural systems underlying predatory hunting. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
Background: Ontologies have increasingly been used in the biomedical domain, which has prompted the emergence of different initiatives to facilitate their development and integration. The Open Biological and Biomedical Ontologies (OBO) Foundry consortium provides a repository of life-science ontologies, which are developed according to a set of shared principles. This consortium has developed an ontology called the OBO Relation Ontology, aiming at standardizing the different types of biological entity classes and associated relationships. Since ontologies are primarily intended to be used by humans, the use of graphical notations for ontology development facilitates the capture, comprehension and communication of knowledge among its users. However, OBO Foundry ontologies are captured and represented essentially using text-based notations. The Unified Modeling Language (UML) provides a standard and widely-used graphical notation for modeling computer systems. UML provides a well-defined set of modeling elements, which can be extended using a built-in extension mechanism named Profile. Thus, this work aims at developing a UML profile for the OBO Relation Ontology to provide a domain-specific set of modeling elements that can be used to create standard UML-based ontologies in the biomedical domain. Results: We have studied the OBO Relation Ontology, the UML metamodel and the UML profiling mechanism. Based on these studies, we have proposed an extension to the UML metamodel in conformance with the OBO Relation Ontology and we have defined a profile that implements the extended metamodel. Finally, we have applied the proposed UML profile in the development of a number of fragments from different ontologies. In particular, we have considered the Gene Ontology (GO), the PRotein Ontology (PRO) and the Xenopus Anatomy and Development Ontology (XAO). Conclusions: The use of an established and well-known graphical language in the development of biomedical ontologies provides a more intuitive form of capturing and representing knowledge than using only text-based notations. The use of the profile requires the domain expert to reason about the underlying semantics of the concepts and relationships being modeled, which helps prevent the introduction of inconsistencies in an ontology under development and facilitates the identification and correction of errors in an already defined ontology.
Abstract:
Background: The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for a system-level analysis. This procedure tries to summarize the information focussing on classification designs such as Gene Ontology, KEGG pathways, and so on, instead of focussing on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only significance has been used to deal with the ontology term enrichment problem. Results: BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can be easily incorporated into pre-existent pipelines; Windows, to be controlled interactively; and a web-tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion: The Bayesian model accounts for the fact that not all genes from a given category are necessarily observable in microarray data, owing to low intensity signal, quality filters, genes that were not spotted, and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis.
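For contrast, the common significance-based analysis that BayGO moves beyond is typically a hypergeometric (one-sided Fisher) test of term over-representation. The sketch below shows that classical test, not BayGO's Bayesian association model, and uses hypothetical counts (requires SciPy):

```python
from scipy.stats import hypergeom

def enrichment_pvalue(k, n, K, N):
    """Classical p-value that an ontology term is over-represented in a gene list.

    k -- genes in the list annotated with the term
    n -- size of the gene list (e.g. differentially expressed genes)
    K -- genes on the array annotated with the term
    N -- total genes on the array
    """
    # Probability of observing k or more annotated genes in the list by chance.
    return hypergeom.sf(k - 1, N, K, n)

# Hypothetical counts: 12 of 200 selected genes carry the term,
# 150 of 6000 array genes carry it overall.
print(enrichment_pvalue(k=12, n=200, K=150, N=6000))
```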
Abstract:
Webcam App is an application whose main social objective is to allow people to hold video conferences over the web, free of charge and in a simple way. For its development, the elements that HTML5 provides for multimedia support, namely the <video> and <audio> elements, proved very useful. Two of the APIs implemented by WebRTC for real-time transmission of the audio and video captured from the webcam are also used: MediaStream (getUserMedia) and RTCPeerConnection. Node.js was chosen as the web server to support this application, since one of its strong points is the ability to keep many connections open, a fundamental characteristic in a video-calling application where thousands of users create and send connection requests simultaneously. In order to give the application a pleasant appearance and a usable, familiar environment for its users, the CMS Elgg is used as the social-network framework. CMS Elgg provides common functionality such as connecting with friends, sending messages and sharing content. The Unified Software Development Process is used as the base methodology, allowing this work to be carried out in an organized way and producing development artifacts. As a result, an open-source solution is obtained that serves as a model for real-time communication without the need to download, install or update any third-party plug-in, and that demonstrates the reliability of systems based on HTML5 and WebRTC.
Abstract:
The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANET) are the general model of these highly dynamic networks, which can be specialized, depending on the application case, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two interesting deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smart phones or PDAs in crowded areas (termed dense MANET), and dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services facilitating application development. To this aim we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANET; it implements novel lightweight protocols to maintain a desired replication degree despite participant mobility, and to perform resource retrieval efficiently. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, exploiting node mobility to opportunistically diffuse data summaries among neighbor vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
Abstract:
Recent statistics have shown that two of the most important causes of failure in UAV (Uninhabited Aerial Vehicle) missions are the low level of decisional autonomy of the vehicles and the man-machine interface. A relevant issue is therefore to design a display/controls architecture that allows efficient interaction between the operator and the remote vehicle, and to develop a level of automation that allows the vehicle to decide about changes in the mission. The research presented in this paper focuses on a modular man-machine interface simulator for UAV control, which simulates UAV missions and was developed to experiment with solutions to this problem. The main components of the simulator are an advanced interface and an automation block, which comprises an algorithm implementing the system's level of automation. The simulator has been designed and developed following a user-centred design approach, in order to take into account the operator's needs in communicating with the vehicle. The level of automation follows supervisory control theory, according to which the human becomes a supervisor who sends high-level commands, such as parts of the mission, targets, constraints and if-then rules, while the vehicle receives, interprets and translates such commands into detailed actions such as routes or actions on the control system. To allow the vehicle to calculate and recalculate a safe and efficient route, in terms of distance, time and fuel, a 3D planning algorithm has been developed. It is based on treating UASs, representative of real-world systems, as objects moving in a virtual environment (terrain, obstacles, and no-fly zones) that replicates the airspace. Original obstacle-avoidance strategies have been conceived to generate mission plans that are consistent with flight rules and with the vehicle's performance constraints. The interface is based on a touch screen, used to send high-level commands to the vehicle, and a 3D virtual display which provides a stereoscopic and augmented visualization of the complex scenario in which the vehicle operates. It is also provided with an audio feedback message generator. Simulation tests have been conducted with pilot trainers to evaluate the reliability of the algorithm and the effectiveness and efficiency of the interface in supporting the operator in the supervision of a UAV mission. Results have revealed that the planning algorithm calculates very efficient routes in a few seconds, that an adequate level of workload is required to command the vehicle, and that the 3D-based interface provides the operator with a good sense of presence and enhances awareness of the mission scenario and of the vehicle under control.
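A hedged sketch of obstacle-avoiding route planning on a 3D occupancy grid, in the spirit of the planning task described above; the authors' algorithm, its cost terms (distance, time, fuel) and its airspace model are not reproduced here, and the grid bounds and no-fly cells below are hypothetical:

```python
import heapq

def plan_route(start, goal, blocked, bounds):
    """A* search over a 3D grid; 'blocked' is a set of no-fly/obstacle cells."""
    def h(c):  # admissible Manhattan-distance heuristic
        return sum(abs(a - b) for a, b in zip(c, goal))
    def neighbours(c):
        x, y, z = c
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if all(0 <= v < m for v, m in zip(n, bounds)) and n not in blocked:
                yield n
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        for n in neighbours(cell):
            if n not in seen:
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, n, path + [n]))
    return None  # no admissible route exists

# Hypothetical airspace: a 10x10x5 grid with a small no-fly block in the middle.
blocked = {(x, y, z) for x in range(4, 6) for y in range(0, 8) for z in range(0, 3)}
print(plan_route((0, 0, 0), (9, 9, 2), blocked, (10, 10, 5)))
```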
Abstract:
Doctoral programme: Ingeniería de Telecomunicación Avanzada.
Abstract:
As time goes by, networks of GNSS (Global Navigation Satellite System) permanent stations are increasingly becoming a valuable support to satellite surveying techniques. They are at the same time an effective materialization of the reference system and a useful aid to topographic surveying and to monitoring applications for deformation control. Alongside the now classical static post-processing applications, real-time measurements are increasingly used and requested by professional users. In all cases the determination of precise coordinates for the permanent stations is very important, to the point that it was decided to carry it out with different processing environments. Bernese, Gamit (which share the differenced approach) and Gipsy (which uses the undifferenced approach) were compared. The use of three software packages made it essential to identify a common processing strategy able to guarantee that the ancillary data and the physical parameters adopted would not be a source of divergence among the solutions obtained. The analysis of networks of national extent, or of local networks over long time spans, involves the processing of thousands if not tens of thousands of files; in addition, sometimes because of trivial errors, or in order to run scientific tests, it is often necessary to repeat the processing. Many resources were therefore invested in developing automatic procedures aimed, on the one hand, at preparing the archives and, on the other, at analysing the results and comparing them when several solutions are available. These procedures were developed by processing the most significant datasets made available to DISTART (Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento del Territorio - Università di Bologna). It was thus possible, at the same time, to compute the position of the permanent stations of some important local and national networks and to compare some of the most important scientific codes that perform this task. As regards the comparison among the different software packages, it was found that: • the solutions obtained with Bernese and Gamit (the two differenced software packages) are always in perfect agreement; • the Gipsy solutions (obtained with the undifferenced method) are almost always slightly more scattered than those of the other packages and sometimes show appreciable numerical differences with respect to the other solutions, especially for the East coordinate; the differences are however contained within a few millimetres, and the lines describing the trends are in any case practically parallel to those of the other two codes; • the aforementioned East bias between Gipsy and the differenced solutions is more evident in the presence of certain Antenna/Radome combinations and seems to be related to the use of absolute calibrations by the different software packages. It must also be considered that Gipsy is considerably faster than the differenced codes and, above all, that with the undifferenced procedure the file of each station for each day is processed independently of the others, with evidently greater flexibility of management: if an instrumental error is found at a single station, or if it is decided to add or remove a station from the network, it is not necessary to recompute the whole network.
Together with the other networks it was possible to analyse the Rete Dinamica Nazionale (RDN), not only over the 28 days that led to its first definition, but also over four further 28-day intervals, spaced six months apart and therefore covering an overall time span of two years. It was thus possible to verify that the RDN can be used to frame any Italian regional network into ITRF05 (International Terrestrial Reference Frame), despite the still limited time span. On the one hand, (purely indicative and unofficial) ITRF velocities of the RDN stations were estimated; on the other, a test was carried out in which a regional network was framed into ITRF through the RDN, and it was verified that there are no appreciable differences with respect to framing it into ITRF through an adequate number of IGS/EUREF stations (International GNSS Service / European REference Frame, Sub-Commission for Europe of the International Association of Geodesy).
Abstract:
Stock overexploitation and socio-economic sustainability are two major issues currently at stake in European fisheries. In this view the European Commission is considering the implementation of management plans as a means to move towards a longer-term perspective on fisheries management, to consider regional differences and to increase stakeholder involvement. Adriatic small pelagic species (anchovies and sardines) are among the most studied species in the world from a biological perspective, and several economic analyses have also been carried out on the Italian pelagic fishery; despite that, no complete bioeconomic modelling considering all biological, technical and economic issues has been carried out yet. Bioeconomic models cannot be considered foolproof tools, but they are important implements to help decision makers and can supply a fundamental scientific basis for management plans. This research gathers all available information (from biological, technological and economic perspectives) in order to build a bioeconomic model of the Adriatic pelagic fishery. Different approaches are analyzed, and some of them are developed further, to highlight potential divergences in results, characteristics and implications. Growth, production and demand functions are estimated. A formal analysis of the interaction and competition between the Italian and Croatian fleets is carried out, proposing different equilibria for open access, duopoly and a form of cooperative solution. However, normative judgments are limited because of poor knowledge of population dynamics and of data related to the Croatian fleet.
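As a purely illustrative reference point for the kind of equilibria mentioned above (the thesis estimates its own growth, production and demand functions, which need not coincide with this textbook formulation), the Gordon-Schaefer surplus-production model yields the open-access equilibrium in closed form:

```latex
% Gordon-Schaefer surplus-production model (textbook illustration only).
\[
  \frac{dX}{dt} = rX\Bigl(1-\frac{X}{K}\Bigr) - qEX ,
  \qquad H = qEX ,
\]
% r: intrinsic growth rate, K: carrying capacity, q: catchability,
% E: fishing effort, H: harvest.
% At the open-access (bionomic) equilibrium rents are dissipated, pH = cE,
% which gives the equilibrium stock
\[
  X_{\mathrm{OA}} = \frac{c}{pq},
\]
% where p is the ex-vessel price and c the unit cost of effort.
```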
Abstract:
The research concerns the development and application of a meta-design approach aimed at defining criteria of architectural and landscape quality in the design of small and medium-sized wineries that process raw material, predominantly of their own production. The analysis of the wine-making supply chain, of the scientific literature, of sector regulations and of examples of "excellent wine architecture" has shown that mainly industrial wineries, and aspects connected with the technological innovation of the equipment, have been investigated so far. Constructive and technological solutions aimed at architectural and environmental quality, current dynamics in wine-and-food tourism, new farm functions, and issues related to the sustainability of the intervention are still little explored, especially with reference to small and medium-sized wineries. Taking as a reference the territory and the built system of the Nuovo Circondario Imolese (an area representative, by vocation and productive expression, of the wine-making sector of Emilia-Romagna), a sample of wineries with annual production not exceeding 5000 hl was identified. The analyses carried out on the sample made it possible to determine: the ways in which the built spaces are functionally aggregated, the existing relationships with the landscape, distributive and material-construction aspects, and the approximate dimensions of the rooms used for production. The case study concerning the requalification of a winery representative of the sector was used to develop and test design criteria guided by assessments of energy performance, architectural quality, and environmental, economic and landscape sustainability. The cost-benefit analysis (even without considering the positive effects in terms of occupant well-being and the benefit to the community from the pollution-related damage avoided in buildings designed to guarantee indoor environmental quality and energy efficiency) showed that the proposed investment is paid back within a few years, despite the still high costs of quality materials and of the components needed for the correct climatic control of the buildings.
Abstract:
This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer backscattered signal. The BSM then identifies the discontinuity edge that has the highest probability of effectively marking the BL height. Information from contemporaneous physical boundary layer model simulations and from a climatological dataset of BL height evolution is combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the best-fitting parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases for which the bulk Richardson number turns out to be quite large. This analysis supports earlier results that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One of the practical consequences is that boundary layer height diagnostic formulations which rely mainly on surface fluxes are in disagreement with what is obtained by inspecting co-located radiosounding profiles.
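A sketch of the similarity-theory regression described above, using the standard log-linear profile forms for the stable surface layer; the exact functional forms and constants adopted in the thesis may differ:

```latex
% Monin-Obukhov similarity profiles for the stable surface layer
% (standard log-linear forms; beta_m, beta_h are empirical constants of order 5).
\[
  u(z) = \frac{u_*}{\kappa}\left[\ln\frac{z}{z_0} + \beta_m \frac{z}{L}\right],
  \qquad
  \theta(z) - \theta_0 = \frac{\theta_*}{\kappa}
      \left[\ln\frac{z}{z_{0h}} + \beta_h \frac{z}{L}\right],
\]
\[
  L = \frac{u_*^{2}\,\overline{\theta}}{\kappa g\,\theta_*},
  \qquad
  \theta_* = -\frac{\overline{w'\theta'}_0}{u_*} .
\]
% Fitting u(z) and theta(z) in the lower stable boundary layer yields the
% surface momentum and heat fluxes (through u_* and theta_*) as best-fit
% parameters, to be compared with sonic-anemometer eddy-covariance values.
```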
Abstract:
Over the last 60 years, computers and software have fostered incredible advancements in every field. Nowadays, however, these systems are so complex that it is difficult, if not impossible, to understand whether they meet some requirement or are able to show some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach that performs the conformance check so as to identify any deviation from the desired behaviour as soon as possible, and possibly apply some corrections. The declarative framework that implements our approach, entirely developed on the promising open source forward-chaining Production Rule System (PRS) named Drools, consists of three components: 1. a monitoring module based on a novel, efficient implementation of Event Calculus (EC); 2. a general purpose hybrid reasoning module (the first of its genre) merging temporal, semantic, fuzzy and rule-based reasoning; 3. a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology allows any deviation from the desired behaviour to be reconciled as soon as it is detected. In conclusion, the proposed methodology brings some advancements toward solving the problem of conformance checking, helping to fill the gap between humans and increasingly complex technology.
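A minimal, hedged illustration of the Event Calculus semantics on which the monitoring module is based; the dissertation's implementation lives in Drools and is considerably more sophisticated, and the fluents and events below are hypothetical:

```python
# Minimal discrete Event Calculus: a fluent holds at time t if it was
# initiated by some earlier event and not terminated in between.
def holds_at(fluent, t, events, initiates, terminates):
    """events: list of (time, event) pairs, assumed sorted by time."""
    state = False
    for time, ev in events:
        if time >= t:
            break
        if (ev, fluent) in initiates:
            state = True
        elif (ev, fluent) in terminates:
            state = False
    return state

# Hypothetical domain: an order must remain 'open' until it is shipped.
initiates  = {("place_order", "open")}
terminates = {("ship_order", "open")}
events = [(1, "place_order"), (7, "ship_order")]

print(holds_at("open", 5, events, initiates, terminates))   # True
print(holds_at("open", 9, events, initiates, terminates))   # False
```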