978 results for "Identification problem"


Relevance:

80.00%

Abstract:

This work is an initial study of a numerical method for identifying multiple leak zones in saturated unsteady flow. Using the conventional saturated groundwater flow equation, the leak identification problem is modelled as a Cauchy problem for the heat equation, and the aim is to find the regions on the boundary of the solution domain where the solution vanishes, since leak zones correspond to null pressure values. This problem is ill-posed; therefore, to reconstruct the solution in a stable way, we modify and employ an iterative regularizing method proposed in [1] and [2]. In this method, mixed well-posed problems, obtained by changing the boundary conditions, are solved for the heat operator as well as for its adjoint, to obtain a sequence of approximations to the original Cauchy problem. The mixed problems are solved using a finite element method (FEM), and the numerical results indicate that the leak zones can be identified with the proposed method.
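For reference, one common way to state the Cauchy problem described above, in generic notation that is not necessarily the paper's exact formulation (Ω is the flow domain, Γ₁ the accessible part of the boundary, Γ₀ the part where leaks are sought):

```latex
\partial_t u - \Delta u = 0 \quad \text{in } \Omega \times (0,T), \qquad
u = f, \quad \partial_\nu u = g \quad \text{on } \Gamma_1 \times (0,T),
```

and the task is to recover u on Γ₀ × (0,T), with ∂Ω = Γ₀ ∪ Γ₁; the leak zones are the regions of Γ₀ where u vanishes.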

Relevance:

70.00%

Abstract:

In this study, a mathematical model for the production of fructo-oligosaccharides (FOS) by Aureobasidium pullulans is developed. This model contains a relatively large set of unknown parameters, and the identification problem is analyzed using simulation data as well as experimental data. Batch experiments were not sufficiently informative to uniquely estimate all the unknown parameters; thus, additional experiments have to be performed in fed-batch mode to supply the missing information. © 2015 IEEE.
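As a rough sketch of this kind of parameter-identification setup — fitting the unknown parameters of a kinetic ODE model to measured data by nonlinear least squares — the snippet below uses a simple Monod-type placeholder model; the state variables, parameter names, and numbers are illustrative and are not the FOS model of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def model(t, y, mu_max, K_s, Y_xs):
    # y = [biomass X, substrate S]; simple Monod growth as a stand-in kinetic law
    X, S = y
    mu = mu_max * S / (K_s + S)
    return [mu * X, -mu * X / Y_xs]

t_obs = np.linspace(0.0, 24.0, 13)                   # sampling times (h)
y0 = [0.1, 50.0]                                     # initial biomass and substrate
true_p = (0.25, 5.0, 0.5)
sol = solve_ivp(model, (0, 24), y0, t_eval=t_obs, args=true_p)
data = sol.y + 0.05 * np.random.default_rng(0).standard_normal(sol.y.shape)

def residuals(p):
    # simulate the model with candidate parameters and compare with the measurements
    sim = solve_ivp(model, (0, 24), y0, t_eval=t_obs, args=tuple(p))
    return (sim.y - data).ravel()

fit = least_squares(residuals, x0=[0.1, 1.0, 0.3], bounds=(0, np.inf))
print(fit.x)   # estimated (mu_max, K_s, Y_xs)
```

Whether all parameters are uniquely identifiable from such data is exactly the practical identifiability question raised in the abstract.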

Relevance:

70.00%

Abstract:

The study of algorithms for active vibration control in flexible structures has become an area of great interest to researchers, owing to the many requirements for better performance in mechanical systems such as aircraft and aerospace structures. Intelligent systems, consisting of a base structure with attached sensors and actuators, are capable of guaranteeing the demanded conditions through the application of various types of controllers. The design of active controllers generally requires a mathematical model that enables a state-space representation, preferably in modal coordinates, so that the system can be truncated and the order of the controllers reduced. In practical engineering applications, some mathematical models based on discrete-time systems cannot represent the physical problem; therefore, system parameter identification techniques must be used. Parameter identification techniques determine the unknown values by manipulating the input (disturbance) and output (response) signals of the system. Several methods have recently been proposed to solve identification problems, although none of them can be considered universally appropriate for all situations. This paper addresses the application of a linear quadratic regulator controller to a structure whose damping, stiffness, and mass matrices were identified using Chebyshev polynomial functions.
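A minimal sketch of the final step, assuming the mass, damping, and stiffness matrices M, C, K have already been identified (the small matrices below are placeholders, not the Chebyshev-identified ones): assemble the state-space model and compute a linear quadratic regulator gain via the continuous-time algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

M = np.diag([1.0, 1.0])                       # identified mass matrix (placeholder)
K = np.array([[20.0, -10.0], [-10.0, 10.0]])  # identified stiffness matrix (placeholder)
C = 0.02 * K                                  # identified damping matrix (placeholder)
B0 = np.array([[1.0], [0.0]])                 # actuator influence vector (placeholder)

n = M.shape[0]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ C]])        # state x = [displacements; velocities]
B = np.vstack([np.zeros((n, 1)), Minv @ B0])

Q = np.eye(2 * n)                              # state weighting
R = np.array([[0.1]])                          # control-effort weighting
P = solve_continuous_are(A, B, Q, R)
G = np.linalg.solve(R, B.T @ P)                # LQR gain: u = -G x
print(G)
```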

Relevance:

70.00%

Abstract:

The re-identification problem has commonly been tackled using appearance features based on salient points and color information. In this paper, we focus on the possibilities that simple geometric features obtained from depth images captured with RGB-D cameras may offer for the task, particularly when working under severe illumination conditions. The results achieved for different sets of simple geometric features extracted in a top-view setup suggest that such features provide useful descriptors for the re-identification task, and that they can be integrated into an ambient intelligence environment as part of a sensor network.
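As an illustration only — the specific features below (maximum height, mean height, occupied area) are guesses and not necessarily those used in the paper — simple geometric descriptors can be read off a top-view depth map given the camera-to-floor distance and a person mask:

```python
import numpy as np

def topview_features(depth_m, floor_dist, person_mask, px_area_m2):
    # depth_m: depth map in metres; floor_dist: camera-to-floor distance in metres
    d = depth_m[person_mask]                     # depth values on the person region
    height = floor_dist - d.min()                # tallest point above the floor
    mean_height = floor_dist - d.mean()          # average height of the silhouette
    area = person_mask.sum() * px_area_m2        # occupied ground-plane area
    return np.array([height, mean_height, area])

# Synthetic example: a 'person' blob 1.75 m tall under a camera 3 m above the floor
depth = np.full((240, 320), 3.0)
mask = np.zeros_like(depth, dtype=bool)
mask[100:140, 150:190] = True
depth[mask] = 3.0 - 1.75
print(topview_features(depth, 3.0, mask, px_area_m2=2.5e-5))
```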

Relevance:

70.00%

Abstract:

A new approach to identify multivariable Hammerstein systems is proposed in this paper. By using cardinal cubic spline functions to model the static nonlinearities, the proposed method is effective in modelling processes with hard and/or coupled nonlinearities. With an appropriate transformation, the nonlinear models are parameterized such that the nonlinear identification problem is converted into a linear one. The persistently exciting condition for the transformed input is derived to ensure the estimates are consistent with the true system. A simulation study is performed to demonstrate the effectiveness of the proposed method compared with the existing approaches based on polynomials. (C) 2006 Elsevier Ltd. All rights reserved.
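A minimal sketch of the linear-in-parameters idea: expanding the static nonlinearity in a fixed basis (a polynomial basis here, as a stand-in for the cardinal cubic splines of the paper) turns the Hammerstein identification problem into an ordinary least-squares problem; the toy single-input system below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(-2, 2, 500)                        # persistently exciting input
f = lambda x: np.tanh(x)                           # "true" static nonlinearity
y = np.zeros_like(u)
for t in range(2, len(u)):                         # true system: y(t) = 0.5 y(t-1) + f(u(t-1)) + noise
    y[t] = 0.5 * y[t - 1] + f(u[t - 1]) + 0.01 * rng.standard_normal()

basis = lambda x: np.column_stack([x, x**2, x**3])  # basis expansion of the nonlinearity
Phi = np.column_stack([y[1:-1],                     # regressor y(t-1)
                       basis(u[1:-1])])             # basis functions of u(t-1)
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)    # [a1, b1, b2, b3]: estimates of the linear-in-parameter model
```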

Relevance:

70.00%

Abstract:

Person re-identification involves recognizing a person across non-overlapping camera views, with different pose, illumination, and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person's appearance as a low-dimensional feature vector that is invariant to the common appearance variations encountered in the re-identification problem. Specifically, a Siamese network architecture is used to train a feature extraction network using pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification, identification, and recognition of attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
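A minimal sketch of a Siamese training step with a contrastive loss on pairs of similar and dissimilar images; the tiny embedding network, margin, and input sizes are illustrative only and do not reproduce the multi-task architecture described above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        # L2-normalized low-dimensional appearance descriptor
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

def contrastive_loss(e1, e2, same, margin=0.5):
    # same = 1 for image pairs of the same person, 0 otherwise
    d = F.pairwise_distance(e1, e2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = EmbeddingNet()
x1, x2 = torch.randn(8, 3, 128, 64), torch.randn(8, 3, 128, 64)   # pairs of person crops
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(net(x1), net(x2), same)
loss.backward()
print(loss.item())
```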

Relevance:

60.00%

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of non-overlapping, frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest-scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene-scoring functions, currently available algorithms to determine such a highest-scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest-scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest-scoring genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that such a highest-scoring gene can be stored and updated; this requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Instead, the definition of valid gene structures is given externally in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem; in particular, it allows for multiple-gene, two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
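A simplified sketch of the linear-time assembly idea: exons are visited in order of increasing acceptor position while a second pointer commits, in order of increasing donor position, every exon that already lies fully upstream, so the best upstream gene score is available as a running maximum. Frame compatibility and the Gene Model are deliberately omitted here (every non-overlapping pair is treated as compatible), so this is a simplification rather than the published algorithm:

```python
from dataclasses import dataclass

@dataclass
class Exon:
    start: int      # acceptor position
    end: int        # donor position
    score: float

def best_gene_score(exons):
    by_start = sorted(exons, key=lambda e: e.start)
    by_end = sorted(exons, key=lambda e: e.end)
    best_upstream = 0.0          # best score of a gene whose last donor has been passed
    best_ending = {}             # best gene score ending at each exon
    answer, j = 0.0, 0
    for exon in by_start:
        # commit every exon whose donor lies strictly before this acceptor
        while j < len(by_end) and by_end[j].end < exon.start:
            best_upstream = max(best_upstream, best_ending[id(by_end[j])])
            j += 1
        best_ending[id(exon)] = exon.score + best_upstream
        answer = max(answer, best_ending[id(exon)])
    return answer

exons = [Exon(10, 50, 3.0), Exon(60, 90, 2.0), Exon(40, 120, 4.5), Exon(130, 200, 1.0)]
print(best_gene_score(exons))   # 3.0 + 2.0 + 1.0 = 6.0
```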

Relevance:

60.00%

Abstract:

Although it is commonly accepted that most macroeconomic variables are non-stationary, it is often difficult to identify the source of the non-stationarity. In particular, it is well known that integrated models and short-memory models containing trending components that may display sudden changes in their parameters share some statistical properties that make their identification a hard task. The goal of this paper is to extend the classical testing framework for I(1) versus I(0) plus breaks by considering a more general class of models under the null hypothesis: non-stationary fractionally integrated (FI) processes. A similar identification problem holds in this broader setting, which is shown to be a relevant issue from both a statistical and an economic perspective. The proposed test is developed in the time domain and is very simple to compute. The asymptotic properties of the new technique are derived, and it is shown by simulation that it is very well behaved in finite samples. To illustrate the usefulness of the proposed technique, an application using inflation data is also provided.
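As a small illustration of the identification problem discussed above (this is not the proposed test): a fractionally integrated series with d = 0.4 and a short-memory AR(1) series with a single level shift both display slowly decaying sample autocorrelations in small samples, which is what confounds I(1)/I(0)-plus-breaks procedures:

```python
import numpy as np

rng = np.random.default_rng(2)
T, d = 500, 0.4
eps = rng.standard_normal(T)

# fractionally integrated series x_t = sum_k psi_k eps_{t-k},
# with the standard MA(inf) recursion psi_k = psi_{k-1} * (k - 1 + d) / k
psi = np.ones(T)
for k in range(1, T):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
x_fi = np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(T)])

# short-memory AR(1) series with a single mean break at mid-sample
x_br = np.zeros(T)
for t in range(1, T):
    x_br[t] = 0.5 * x_br[t - 1] + rng.standard_normal()
x_br += np.where(np.arange(T) >= T // 2, 1.5, 0.0)

def acf(x, lags=20):
    # sample autocorrelations up to the given lag
    x = x - x.mean()
    return np.array([x[k:] @ x[:len(x) - k] for k in range(lags)]) / (x @ x)

print(acf(x_fi)[:5], acf(x_br)[:5])
```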

Relevance:

60.00%

Abstract:

The ability to identify letters and encode their positions is a crucial step of the word recognition process. However, despite their word identification problems, the ability of dyslexic children to encode letter identity and letter position within strings has not been systematically investigated. This study aimed at filling this gap and further explored how letter identity and letter-position encoding is modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and to two control groups matched on chronological age (CA) and reading age (RA). Children had to judge whether two successively and briefly presented four-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or substitution of two letters (e.g., TSHF vs. TGHD). Non-words, pseudo-words, and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA controls. A substitution advantage over transpositions was found only for words in dyslexic children, whereas it extended to pseudo-words in RA controls and to all types of items in CA controls. Letters were better identified in the dyslexic group when they belonged to orthographically familiar strings. Letter-position encoding was severely impaired in dyslexic children, who did not show any word-context effect, in contrast to CA controls. Overall, the current findings point to a strong letter identity and letter-position encoding disorder in developmental dyslexia.

Relevance:

60.00%

Abstract:

The representation of a surface, its smoothing, and its use for identification, comparison, classification, and the study of variations in volume, curvature, or topology are ubiquitous in the field of digitization. Among the mathematical methods, we have retained diffeomorphic transformations of a reference pattern. There is great theoretical and numerical interest in approximating an arbitrary diffeomorphism by diffeomorphisms generated by velocity fields. On the theoretical side, the question is: is the subgroup of diffeomorphisms generated by velocity fields dense in the larger group of Micheletti for the Courant metric? Despite some progress made here, this question remains open. The avenues pursued then converged on the subgroup of Azencott and Trouvé and its metric in the context of imaging. This metric corresponds to a notion of geodesic between two diffeomorphisms in their subgroup. Optimization is used to obtain a state-adjoint system of equations characterizing the optimal solution of the identification problem from the observations. This approach is adapted to the identification of surfaces obtained from a digitizer, such as, for example, the scan of a face. This problem is much more difficult than the imaging one. One must then introduce a curvilinear reference system and a faceted surface for the computations. We give the formulation of the identification problem and of the computation of the change of volume with respect to a reference scan.
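For reference, the geodesic distance on the subgroup of diffeomorphisms generated by velocity fields is usually written as follows (generic notation, with V an admissible Hilbert space of velocity fields; this is the standard form, not a formula quoted from the thesis):

```latex
d(\mathrm{id},\varphi)^2 \;=\; \inf\Big\{ \int_0^1 \|v_t\|_V^2 \, dt \;:\;
  \partial_t \varphi_t = v_t \circ \varphi_t,\ \varphi_0 = \mathrm{id},\ \varphi_1 = \varphi \Big\}.
```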

Relevance:

60.00%

Abstract:

This paper studies the effect of strengthening democracy, as captured by an increase in voting rights, on the incidence of violent civil conflict in nineteenth-century Colombia. Empirically studying the relationship between democracy and conflict is challenging, not only because of conceptual problems in defining and measuring democracy, but also because political institutions and violence are jointly determined. We take advantage of an experiment of history to examine the impact of one simple, measurable dimension of democracy (the size of the franchise) on conflict, while at the same time attempting to overcome the identification problem. In 1853, Colombia established universal male suffrage. Using a simple difference-in-differences specification at the municipal level, we find that municipalities where more voters were enfranchised relative to their population experienced fewer violent political battles while the reform was in effect. The results are robust to including a number of additional controls. Moreover, we investigate the potential mechanisms driving the results. In particular, we look at which components of the proportion of new voters in 1853 explain the results, and we examine whether the results are stronger in places with more political competition and state capacity. We interpret our findings as suggesting that violence in nineteenth-century Colombia was a technology for political elites to compete for the rents from power, and that democracy constituted an alternative way to compete, substituting for violence.
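A generic difference-in-differences specification of the type described, written in our own notation rather than the authors' exact regression, would be

```latex
\text{Conflict}_{mt} \;=\; \alpha_m + \gamma_t
  + \beta\,\big(\text{Enfranchised}_m \times \text{Reform}_t\big)
  + X_{mt}'\delta + \varepsilon_{mt},
```

where α_m and γ_t are municipality and period fixed effects, Enfranchised_m is the share of newly enfranchised voters, Reform_t indicates the years the 1853 reform was in effect, and β is the coefficient of interest.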

Relevance:

60.00%

Abstract:

The problem of identifying credit supply and demand equations in order to verify the existence of the credit channel has been widely discussed in recent decades. This work evaluates the identification strategy via estimation of a Vector Error Correction Model to determine the relevance of the credit channel in Brazil. Monthly aggregate data covering the period from 2001 to 2010 were used.
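A minimal sketch of estimating a Vector Error Correction Model with statsmodels; the synthetic two-variable system, lag order, and cointegration rank below are illustrative choices and not those of the study:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
T = 120                                           # ten years of monthly data
common = np.cumsum(rng.standard_normal(T))        # shared stochastic trend
credit = common + 0.3 * rng.standard_normal(T)
output = 0.8 * common + 0.3 * rng.standard_normal(T)
data = pd.DataFrame({"credit": credit, "output": output})

model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.alpha)   # adjustment (loading) coefficients
print(res.beta)    # cointegrating vector
```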

Relevance:

60.00%

Abstract:

It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using the proportional hazard function described by Cox (1972), treating the age factor as the underlying hazard to estimate the parameters for the cohort and period factors. (2) The model is flexible, so that both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables, as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced. Two specific models are considered to illustrate the new approach and are applied to the U.S. prostate cancer data. We find that there are significant differences between all cohorts and that there is a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that old people have a much higher risk than young people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts for both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period-factor model (0 0 0 1 1 1 1). These latter results are similar to those of the previous study by Holford (1983). The new approach offers a general method to analyze age-period-cohort data without using any arbitrary constraint in the model.
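A generic statement of the modelling idea in item (1), in our own notation: age a serves as the underlying time scale of the proportional-hazards model, while cohort and period enter as (possibly time-dependent) covariates,

```latex
\lambda(a \mid \text{cohort}, \text{period}) \;=\; \lambda_0(a)\,
  \exp\!\big(\beta_1\, \text{cohort} + \beta_2\, \text{period}(a)\big),
```

so that the baseline hazard λ₀(a) absorbs the age effect and β₁, β₂ are estimated by conditional (partial) likelihood.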

Relevance:

60.00%

Abstract:

devcon transforms the coefficients of 0/1 dummy variables so that they reflect deviations from the "grand mean" rather than deviations from the reference category (the transformed coefficients are equivalent to those obtained by so-called "effects coding"), and it adds the coefficient for the reference category. The variance-covariance matrix of the estimates is transformed accordingly. The transformed estimates can be used with post-estimation procedures. In particular, devcon can be used to solve the identification problem for dummy-variable effects in the so-called Blinder-Oaxaca decomposition (see the oaxaca package).
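A minimal numpy illustration of the coefficient transformation devcon performs, in the unweighted case and with made-up numbers (the accompanying transformation of the variance-covariance matrix is omitted):

```python
import numpy as np

# coefficients for categories 1..4 with category 1 as the reference (coefficient 0)
b = np.array([0.0, 0.40, 0.10, -0.20])
grand_mean = b.mean()
b_effects = b - grand_mean                 # deviations from the grand mean, incl. the reference
print(b_effects, b_effects.sum())          # deviations sum to zero by construction
```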

Relevance:

60.00%

Abstract:

AOSD'03 Practitioner Report. Performance analysis is motivated as an ideal domain for benefiting from the application of Aspect-Oriented (AO) technology. The experience of a ten-week project to apply AO to the performance analysis domain is described. We show how all phases of a performance analyst's activities – initial profiling, problem identification, problem analysis, and solution exploration – were candidates for AO technology assistance, some being addressed with more success than others. A Profiling Workbench is described that leverages the capabilities of AspectJ and delivers unique capabilities into the hands of developers exploring caching opportunities.