898 results for linear and nonlinear systems identification
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demand for location-based services. Although numerous methods have been proposed over the years, it remains challenging to find a convincing solution that combines high positioning accuracy with ease of deployment. Radio-based indoor positioning has emerged as a dominant approach due to its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated for indoor positioning for decades; however, it is prone to multipath propagation, and fingerprinting has therefore become the most commonly used RSSI-based method. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map beforehand, which makes deploying the positioning system very time consuming. Using time information instead for radio-based indoor positioning is challenged by time synchronization among anchor nodes and by timestamp accuracy. Besides radio-based methods, intensive research has been conducted on inertial sensors for indoor tracking, driven by the rapid development of smartphones. However, these methods are normally prone to accumulative errors and may not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information obtained from physical-layer processing of radio signals, such as timestamps and channel information. The contributions of this thesis fall into two parts: time-based positioning and channel-information-based positioning. First, for time-based indoor positioning (especially with narrow-band signals), we address the challenges of compensating synchronization offsets among anchor nodes, designing high-resolution timestamps, and developing accurate positioning methods. Second, we develop range-based positioning methods that use channel information to passively locate and track WiFi targets; range-based methods require far less calibration effort than fingerprinting, easing deployment. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter, and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial-sensor and channel information to track indoor targets, achieving higher tracking accuracy than methods relying solely on either radio signals or inertial sensors.
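As a concrete illustration of the range-based tracking loop described above, here is a minimal sketch of a range-based particle filter in Python. The anchor layout, noise levels, and random-walk motion model are illustrative assumptions, not the enhanced design proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # anchor positions (assumed)
sigma_r = 0.5        # range-measurement noise std dev in metres (assumed)
sigma_q = 0.2        # process-noise std dev in metres (assumed)
n_particles = 1000

particles = rng.uniform(0, 10, size=(n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(particles, weights, ranges):
    """One predict-update-resample cycle given measured anchor ranges."""
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0, sigma_q, particles.shape)
    # Update: weight each particle by the likelihood of the observed ranges.
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    ll = -0.5 * np.sum(((d - ranges) / sigma_r) ** 2, axis=1)
    weights = weights * np.exp(ll - ll.max())
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return particles, weights

# Example: one measurement of a target at (3, 4).
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma_r, 3)
particles, weights = pf_step(particles, weights, ranges)
print("estimate:", np.average(particles, weights=weights, axis=0))
```

The enhanced variants refine the weighting and resampling steps, but the cycle sketched here (predict, weight by range likelihood, resample) is the skeleton shared by the range-based trackers compared above.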
Abstract:
This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might be best matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also need to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system-relevant knowledge.
Abstract:
My dissertation focuses on two aspects of RNA sequencing technology. The first is methodology for modeling the overdispersion inherent in RNA-seq data for differential expression analysis; this aspect is addressed in three sections. The second is the application of RNA-seq data to identify the CpG island methylator phenotype (CIMP) by integrating datasets of mRNA expression level and DNA methylation status. Section 1: The cost of DNA sequencing has dropped dramatically in the past decade, and genomic research consequently depends increasingly on sequencing technology. However, it remains unclear how sequencing capacity influences the accuracy of mRNA expression measurement. We observe that accuracy improves as sequencing depth increases. To model the overdispersion, we use the beta-binomial distribution with a new parameter indicating the dependency between overdispersion and sequencing depth. Our modified beta-binomial model performs better than the binomial or the pure beta-binomial model, with a lower false discovery rate. Section 2: Although a number of methods have been proposed to accurately analyze differential RNA expression at the gene level, modeling at the base-pair level is also required. Here, we find that the overdispersion rate decreases as sequencing depth increases at the base-pair level. We propose four models and compare them with each other; as expected, our beta-binomial model with a dynamic overdispersion rate proves superior. Section 3: We investigate biases in RNA-seq by exploring the measurement of an external control, spike-in RNA. This study is based on two datasets with spike-in controls obtained from a recent study. We observe a previously unreported bias in the measurement of the spike-in transcripts that arises from the influence of the sample transcripts in RNA-seq, and we find that this influence is related to the local sequence of the random hexamer used in priming. We propose a model of this inequality between samples to correct this type of bias. Section 4: The expression of a gene can be turned off when its promoter is highly methylated. Several studies have reported a clear threshold effect in gene silencing mediated by DNA methylation, and it is reasonable to assume the thresholds are gene-specific. It is also intriguing to investigate genes that are largely controlled by DNA methylation; we call these "L-shaped" genes. We develop a method to determine the DNA methylation threshold and identify a new CIMP of BRCA. In conclusion, we provide a detailed understanding of the relationship between the overdispersion rate and sequencing depth; we reveal a new bias in RNA-seq and relate it to the local sequence; and we develop a powerful method to dichotomize methylation status, thereby identifying a new CIMP of breast cancer with a distinct classification of molecular characteristics and clinical features.
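For illustration, a beta-binomial likelihood whose overdispersion depends on sequencing depth can be fit as sketched below. The power-law link rho(n) = rho0 * n^(-gamma) and all parameter values are assumptions made for this sketch, not the dissertation's exact parameterization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

def neg_log_lik(theta, k, n):
    """k successes out of n reads per site; theta = (logit p, log rho0, gamma)."""
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    rho = np.exp(theta[1]) * n ** (-theta[2])   # depth-dependent overdispersion (assumed link)
    rho = np.clip(rho, 1e-6, 1 - 1e-6)
    a = p * (1.0 - rho) / rho                   # beta-binomial shape parameters
    b = (1.0 - p) * (1.0 - rho) / rho
    return -np.sum(betabinom.logpmf(k, n, a, b))

# Simulated example: 200 sites with varying depth and true proportion p = 0.3.
rng = np.random.default_rng(1)
n = rng.integers(50, 5000, size=200)
k = rng.binomial(n, 0.3)
fit = minimize(neg_log_lik, x0=np.array([0.0, -2.0, 0.5]), args=(k, n),
               method="Nelder-Mead")
print(fit.x)
```

A likelihood-ratio comparison against a constant-rho fit is then one way to assess whether depth-dependent overdispersion is warranted for a given dataset.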
Abstract:
Territory today is seen as an organized whole that cannot be understood by separating the elements that compose it; each element is defined by its relation to the others. Thus, a mode of thought that integrates different disciplines and bodies of knowledge begins to engage a reality that is far from yielding immovable certainties, and begins to glimpse strategic horizons. Adapting to the nonlinearity of the relationships that play out across the territory, and to the different speeds at which the various actors operate, requires us to make flexibility an essential feature of the strategic planning methodology. The multi-causality of the phenomena that structure the territory forces us to build qualitative criteria, accepting that we cannot measure these causal chains or reconstruct them completely over time, while still erecting a solid framework for action and transformation that responds to a true and verifiable reality. Phenomena produced on the territory never act in isolation, which entails a responsibility to understand the synergies and constraints that affect the outcomes of the processes they set in motion. This paper corresponds to the Second Phase of the strategic project-identification process of the Territorial Strategic Plan (Plan Estratégico Territorial, PET), begun in 2005; the Plan is carried out by the Subsecretaría de Planificación Territorial of the Ministerio de Planificación Federal and was approached with three aims: to institutionalize the exercise of strategic thinking, to strengthen a transdisciplinary and multisectoral working methodology, and to design a strongly qualitative system for weighting strategic infrastructure projects at both the provincial and national levels. The process yielded a weighted portfolio of infrastructure projects together with a methodology that consolidated the provincial planning teams, both in their relationship with political decision-makers and with actors from the many sectors of government, and through these results consolidated and reinforced a culture of strategic thinking about the territory.
Abstract:
Nowadays computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. The designer therefore needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products offered by different manufacturers, which range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities; this exhaustive analysis helps the designer quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and another for the optimized selection of components; this thesis details the implementation of both. The usefulness of the methodology is corroborated by results on real problems and on experiments designed to test the limits of the algorithms.
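The component-selection step can be pictured with a toy genetic algorithm that assigns one converter from a catalog to each load, minimizing total cost under an efficiency floor. The catalog values and the fitness form are illustrative assumptions; the thesis's actual encoding of architectures and converters is richer.

```python
import random

CATALOG = [  # (cost in $, efficiency) -- illustrative entries
    (2.0, 0.80), (3.5, 0.88), (5.0, 0.92), (8.0, 0.95),
]
N_LOADS, POP, GENS, MIN_EFF = 20, 60, 200, 0.90

def fitness(ind):
    """Lower is better: total cost plus a penalty for infeasible designs."""
    cost = sum(CATALOG[g][0] for g in ind)
    avg_eff = sum(CATALOG[g][1] for g in ind) / len(ind)
    penalty = 1000.0 * max(0.0, MIN_EFF - avg_eff)
    return cost + penalty

def evolve():
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        survivors = pop[:POP // 2]            # elitist truncation selection
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_LOADS)        # point mutation
            child[i] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best cost:", fitness(best))
```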
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges to be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems, composed of two elements: an information model and an agent infrastructure. The model manages the complexity and variability of these systems and enables abstraction from non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at run time. The agent infrastructure is described in detail, explaining its components and the relationships between them. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to the support of distributed configuration scenarios.
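The division of labour between the information model and the agents can be pictured with a minimal sketch: an agent periodically probes a managed resource and reports differences against the shared model. The probe function and model schema below are illustrative stand-ins, not the article's actual Glassfish instrumentation.

```python
import time

# Shared information model: resource name -> last observed state.
model = {}

def probe():
    """Stand-in for querying a managed server (e.g. via its admin API)."""
    return {"heap_used_mb": 512, "active_sessions": 17}

def agent_cycle(resource="app-server-node-1"):
    """One monitoring cycle: probe, compare with the model, record changes."""
    observed = probe()
    if observed != model.get(resource):
        model[resource] = observed
        print(f"change detected on {resource}: {observed}")

for _ in range(3):
    agent_cycle()
    time.sleep(1)
```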
Abstract:
Knowledge management is critical for the success of virtual communities, especially in the case of distributed working groups. A representative example of this scenario is distributed software development, where optimal coordination is necessary to avoid common problems such as duplicated work. This paper discusses the feasibility of using workflow technology as a knowledge management system and presents a practical use case: an information system that has been deployed within a banking environment. It combines common workflow technology with a new conception of the interaction among participants through the extension of existing definition languages.
Abstract:
Public participation is increasingly advocated as a necessary feature of natural resources management. The EU Water Framework Directive (WFD) is such an example, as it prescribes participatory processes as necessary features in basin management plans (EC 2000). The rationale behind this mandate is that involving interest groups ideally yields higher-quality decisions, which are arguably more likely to meet public acceptance (Pahl-Wostl 2006). Furthermore, failing to involve stakeholders in policy-making might hamper the implementation of management initiatives, as controversial decisions can lead pressure lobbies to generate public opposition (Giordano et al. 2005, Mouratiadou and Moran 2007).
Abstract:
We present the design and implementation of the and-parallel component of ACE, a computational model for the full Prolog language that simultaneously exploits both or-parallelism and independent and-parallelism. A high-performance implementation of the ACE model has been realized, and its performance is reported in this paper. We discuss how some of the standard problems that arise when implementing and-parallel systems are solved in ACE, and we then propose a number of optimizations aimed at reducing the overheads and the increased memory consumption that occur in such systems when using previously proposed solutions. Finally, we present results from an implementation of ACE that includes the proposed optimizations. The results show that ACE exploits and-parallelism with high efficiency and high speedups. Furthermore, they show that the proposed optimizations, which are applicable to many other and-parallel systems, significantly decrease memory consumption and increase speedups and absolute performance, both in forward execution and during backtracking.
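The notion of independent and-parallelism that ACE exploits can be illustrated outside Prolog with a small Python sketch: two goals that share no variables are solved concurrently and their solution sets combined. The goal functions are illustrative stand-ins for Prolog predicates, not ACE's actual machinery.

```python
from concurrent.futures import ThreadPoolExecutor

def p():
    return [1, 2, 3]          # solutions for a goal p(X)

def q():
    return ["a", "b"]         # solutions for a goal q(Y)

def and_parallel(goals):
    """Run independent goals concurrently; since they share no variables,
    the conjunction's solutions are the cross product of per-goal solutions."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda g: g(), goals))
    combined = [()]
    for sols in results:
        combined = [c + (s,) for c in combined for s in sols]
    return combined

# The conjunction p(X), q(Y): six combined solutions.
print(and_parallel([p, q]))   # [(1, 'a'), (1, 'b'), (2, 'a'), ...]
```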
Abstract:
We describe a simple, public-domain HTML package for LP/CLP systems. The package makes it easy to generate HTML documents, including HTML forms, from LP/CLP systems. It also provides facilities for parsing the input provided by HTML forms, as well as for creating standalone form handlers. The purpose of this document is to serve as a user's manual as well as a short description of the capabilities of the package. The package was originally developed for SICStus Prolog and the UPM &-Prolog/CIAO systems, but has been adapted to a number of popular LP/CLP systems. The document is also a WWW/HTML primer, containing sufficient information for developing medium-complexity WWW applications in Prolog and other LP and CLP languages.
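The central idea, building documents as structured terms rather than concatenated strings, can be sketched outside Prolog as follows (in Python, since the package's own Prolog API is not reproduced here; the term shape is an illustrative analogue).

```python
def html(term):
    """Render a (tag, attrs, children) term tree to an HTML string."""
    if isinstance(term, str):
        return term               # leaf: plain text
    tag, attrs, children = term
    attr_txt = "".join(f' {k}="{v}"' for k, v in attrs.items())
    body = "".join(html(c) for c in children)
    return f"<{tag}{attr_txt}>{body}</{tag}>"

# A small form, analogous to the terms an LP/CLP program would build.
doc = ("form", {"method": "post", "action": "/handler"}, [
    ("input", {"type": "text", "name": "query"}, []),
    ("input", {"type": "submit", "value": "Send"}, []),
])
print(html(doc))
```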
Abstract:
García et al. present a class of column generation (CG) algorithms for nonlinear programs. The main theoretical motivation is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. Such an algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group investigates the role and the computation of the prolongation of the generated columns to the relative boundary. The last group carries out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. For this investigation we consider two types of test problems: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.
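For orientation, the generic scheme can be written in standard simplicial-decomposition notation (a sketch; the notation is not necessarily that of García et al.): a restricted master problem (RMP) optimizes over the convex hull of the columns generated so far, and a column generation problem (CGP) produces the next column.

```latex
\begin{align*}
\text{(RMP)}\quad & \min_{\lambda \ge 0}\; f\Big(\sum_{i=1}^{k} \lambda_i y_i\Big)
  \quad \text{s.t.} \quad \sum_{i=1}^{k} \lambda_i = 1,\\
\text{(CGP)}\quad & y_{k+1} \in \operatorname*{arg\,min}_{y \in X}\; \varphi_k(y).
\end{align*}
```

Choosing the linear subproblem objective \(\varphi_k(y) = \nabla f(x_k)^{\top} y\) recovers classic simplicial decomposition, while a nonlinear \(\varphi_k\), for instance one augmented with a proximal term, yields the nonlinear column generation variants whose computational efficiency is compared here.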