404 results for regularization
Abstract:
In Latin America, and in Brazil in particular, informal occupation of urban land has become a widespread phenomenon in every city, exposing a series of urban problems and the inefficiency of the state in providing citizens' basic rights, chiefly the right to decent housing. This created the need to prioritize curative public policies such as urban regularization programs, whose objective is to integrate informal settlements into the formal city, with all the impacts that this generates: urbanistic, legal, social and economic. The federal law entitled Estatuto da Cidade (EC), regulated in 2001, is regarded as a legal advance precisely because it attempts to counterbalance this context, introducing a set of principles and instruments intended to guarantee the social function of property and of the city. In theory, this new logic has to underpin all urban policy in the country. Accordingly, this thesis aims to assess whether the urban regularization programs developed in Brazil actually comply with the provisions of that legislation. A case-study methodology was chosen, carried out in the city of Porto Alegre, capital of Rio Grande do Sul, Brazil. The Estatuto da Cidade was first analyzed in order to define the evaluation principles; an evaluation system was then proposed and applied to the two cases studied: one prior to the enactment of the EC, the Condominio dos Anjos, which served as a reference parameter, and one developed after the enactment of that legislation, the Programa Integrado Entrada da Cidade (PIEC). From the analysis it can be concluded that the federal legislation has in fact not had the expected effect. The main conclusions are that the municipal legislation of Porto Alegre had already made considerable advances since the 1990s, some of which even served as examples in the drafting of the EC, which may explain the low perceived impact, and that the principal overseer and shaper of urban policy is the funder of the program, so that many strategies and design decisions depend on the terms of that financing.
Abstract:
The study of passive scalar transport in a turbulent velocity field leads naturally to the notion of generalized flows, which are families of probability distributions on the space of solutions to the associated ordinary differential equations, which no longer satisfy the uniqueness theorem for ordinary differential equations. The two most natural regularizations of this problem, namely regularization by adding small molecular diffusion and regularization by smoothing out the velocity field, are considered. White-in-time random velocity fields are used as an example to examine the variety of phenomena that take place when the velocity field is not spatially regular. Three different regimes, characterized by their degrees of compressibility, are isolated in the parameter space. In the regime of intermediate compressibility, the two different regularizations give rise to two different scaling behaviors for the structure functions of the passive scalar. Physically, this means that the scaling depends on the Prandtl number. In the other two regimes, the two different regularizations give rise to the same generalized flows, even though the sense of convergence can be very different. The “one force, one solution” principle is established for the scalar field in the weakly compressible regime, and for the difference of the scalar in the strongly compressible regime, which is the regime of inverse cascade. Existence and uniqueness of an invariant measure are also proved in these regimes when the transport equation is suitably forced. Finally, incomplete self-similarity in the sense of Barenblatt and Chorin is established.
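For reference, the two regularizations can be written schematically as follows (our notation, not the paper's: θ is the scalar, u the rough velocity field, u_ε a smoothed version of it, κ the molecular diffusivity and f a forcing term):

```latex
% Passive scalar advection with the two regularizations discussed above.
% Notation is ours: \theta = scalar, u = velocity, u_\epsilon = smoothed
% velocity, \kappa = molecular diffusivity, f = forcing.
\begin{align}
  \partial_t \theta + (u \cdot \nabla)\theta &= \kappa \Delta \theta + f
      && \text{(small molecular diffusion, limit } \kappa \to 0\text{)} \\
  \partial_t \theta + (u_\epsilon \cdot \nabla)\theta &= f
      && \text{(smoothed velocity field, limit } \epsilon \to 0\text{)}
\end{align}
```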
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
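For orientation, with a linear slip-weakening friction law the nucleation (coherent slip patch) size h* is commonly estimated, up to a factor of order one, by a dimensional argument of the form below (our symbols, not the paper's: G is the shear modulus, D_c the slip-weakening distance, and τ_p − τ_r the strength drop from peak to residual friction). This also makes explicit why laws with no weakening length scale (D_c = 0) give h* = 0.

```latex
% Order-of-magnitude estimate of the nucleation / coherent slip patch size
% for linear slip weakening (symbols defined in the lead-in above):
h^{*} \;\sim\; \frac{G \, D_c}{\tau_p - \tau_r}
```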
Abstract:
We present simultaneous and continuous observations of the Hα, Hβ, He I D_3, Na I D_1, D_2 doublet and the Ca II H&K lines for the RS CVn system HR 1099. The spectroscopic observations were obtained during the MUSICOS 1998 campaign involving several observatories and instruments, both echelle and long-slit spectrographs. During this campaign, HR 1099 was observed almost continuously for more than 8 orbits of 2.8 days each. Two large optical flares were observed, both showing an increase in the emission of Hα, Ca II H&K, Hβ and He I D_3 and a strong filling-in of the Na I D_1, D_2 doublet. Contemporaneous photometric observations were carried out with the robotic telescopes APT-80 of Catania and Phoenix-25 of Fairborn Observatories. Maps of the distribution of the spotted regions on the photosphere of the binary components were derived using the Maximum Entropy and Tikhonov photometric regularization criteria. Rotational modulation was observed in Hα and He I D_3 in anti-correlation with the photometric light curves. Both flares occurred at the same binary phase (0.85), suggesting that these events took place in the same active region. Simultaneous X-ray observations, performed by the ASM on board RXTE, show several flare-like events, some of which correlate well with the observed optical flares. Rotational modulation in the X-ray light curve has been detected, with minimum flux when the less active G5 V star was in front. A possible periodicity in the X-ray flare-like events was also found.
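As an illustration of the Tikhonov regularization criterion mentioned for the photometric maps, here is a minimal sketch of a regularized linear light-curve inversion; the response matrix, spot map, noise level and regularization weight are all made-up placeholders, not the campaign's actual pipeline.

```python
import numpy as np

# Minimal sketch of Tikhonov-regularized light-curve inversion: model the
# observed flux as F = R @ s, where s holds spot filling factors on surface
# pixels and R is a (placeholder) response matrix.
rng = np.random.default_rng(0)
n_phases, n_pixels = 40, 120
R = rng.random((n_phases, n_pixels))          # hypothetical response matrix
s_true = (rng.random(n_pixels) > 0.9) * 0.5   # hypothetical spot map
flux = R @ s_true + 0.01 * rng.standard_normal(n_phases)

# Tikhonov regularization: minimize ||R s - flux||^2 + lam * ||s||^2,
# which damps the otherwise ill-posed inversion.
lam = 1.0
s_hat = np.linalg.solve(R.T @ R + lam * np.eye(n_pixels), R.T @ flux)
s_hat = np.clip(s_hat, 0.0, 1.0)              # keep filling factors physical
print(s_hat[:10])
```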
Abstract:
This thesis contributes to research toward artificial intelligence using connectionist methods. Recurrent neural networks are an increasingly popular family of sequential models capable, in principle, of learning arbitrary algorithms. These models perform deep learning, a type of machine learning. Their generality and empirical success make them an interesting subject for research and a promising tool for the creation of more general artificial intelligence. The first chapter of this thesis gives a brief overview of the background topics: artificial intelligence, machine learning, deep learning and recurrent neural networks. The following three chapters cover these topics in increasingly specific detail. Finally, we present some contributions made to recurrent neural networks. Chapter \ref{arxiv1} presents our work on regularizing recurrent neural networks. Regularization aims to improve the generalization ability of the model and plays a key role in the performance of several applications of recurrent neural networks, in particular speech recognition. Our approach gives state-of-the-art results on TIMIT, a standard benchmark for this task. Chapter \ref{cpgp} presents a second, still ongoing line of work that explores a new architecture for recurrent neural networks. Recurrent neural networks maintain a hidden state that represents their previous observations. The idea of this work is to encode certain abstract dynamics in the hidden state, giving the network a natural way to encode coherent trends in the state of its environment. Our work builds on an existing model; we describe that work and our contributions, including a preliminary experiment.
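The abstract does not specify which regularizer is used; as a generic illustration of regularizing a recurrent network, the sketch below perturbs the weights of a toy vanilla RNN with Gaussian noise during the forward pass (weight noise), one common regularization strategy in speech recognition. All sizes and data are made up, and this is not necessarily the thesis's method.

```python
import numpy as np

# Generic illustration of regularizing a recurrent network via weight noise.
rng = np.random.default_rng(0)
n_in, n_hid, T = 8, 16, 20
Wx = 0.1 * rng.standard_normal((n_hid, n_in))   # input-to-hidden weights
Wh = 0.1 * rng.standard_normal((n_hid, n_hid))  # hidden-to-hidden weights

def rnn_forward(xs, Wx, Wh, noise_std=0.0):
    """Run a vanilla tanh RNN; optionally perturb the weights with noise."""
    Wx_n = Wx + noise_std * rng.standard_normal(Wx.shape)
    Wh_n = Wh + noise_std * rng.standard_normal(Wh.shape)
    h = np.zeros(n_hid)
    for x in xs:                                # unroll over the sequence
        h = np.tanh(Wx_n @ x + Wh_n @ h)
    return h

xs = rng.standard_normal((T, n_in))
# During training the loss is averaged over such noisy forward passes, which
# discourages brittle weight configurations; at test time noise_std = 0.
print(rnn_forward(xs, Wx, Wh, noise_std=0.05)[:5])
```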
Abstract:
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort on understanding the basic mechanisms to solve blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top performing algorithms.
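To make the discussion concrete, below is a minimal, illustrative sketch of alternating minimization for blind deconvolution with a total variation prior; the image, kernel size, step sizes and regularization weight are placeholders, and this is not the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy alternating minimization for blind deconvolution with a smoothed
# total-variation prior:
#   minimize over x, k   0.5*||k * x - y||^2 + lam * TV(x)
#   subject to           k >= 0, sum(k) = 1
# x is the latent sharp image, k the blur kernel, y the observed blurry image.

def tv_grad(x, eps=1e-3):
    """Approximate gradient of a smoothed isotropic total-variation penalty."""
    dx = np.diff(x, axis=1, append=x[:, -1:])      # forward differences
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (np.diff(px, axis=1, prepend=px[:, :1])  # discrete divergence
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div

rng = np.random.default_rng(0)
y = rng.random((32, 32))                 # placeholder "blurry" image
x = y.copy()                             # initialize the sharp image with the data
k = np.full((5, 5), 1.0 / 25)            # initialize the kernel as a flat blur
m = k.shape[0] // 2
lam, lr_x, lr_k = 1e-2, 0.5, 1e-4

for it in range(50):
    r = convolve2d(x, k, mode="same") - y                     # residual k*x - y
    # image step: data-term gradient is the residual correlated with k
    gx = convolve2d(r, k[::-1, ::-1], mode="same") + lam * tv_grad(x)
    x -= lr_x * gx
    # kernel step: d/dk[a,b] of the data term is sum_ij r[i,j] * x[i+m-a, j+m-b]
    xp = np.pad(x, m)
    gk = np.empty_like(k)
    for a in range(k.shape[0]):
        for b in range(k.shape[1]):
            gk[a, b] = np.sum(r * xp[2*m - a: 2*m - a + x.shape[0],
                                     2*m - b: 2*m - b + x.shape[1]])
    k = np.clip(k - lr_k * gk, 0.0, None)
    k /= k.sum() + 1e-12                                      # keep k a valid blur

print("final data-term residual:",
      np.linalg.norm(convolve2d(x, k, mode="same") - y))
```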
Abstract:
In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using the local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid–fluid interaction. The range of well-depth considered in our GCMC simulations is from 10 to 100 K, which is wide enough to cover all carbon surfaces that we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given experimental data of nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by an inversion process using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach with a number of data sets from the literature, and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, underlining the need for a consistent and reliable method of determining surface area. We therefore suggest the approach of this paper as an alternative to the BET method, because of the long-recognized unrealistic assumptions used in the BET theory. Besides the surface area, the method also provides the differential area distribution versus well-depth. This information can be used as a microscopic fingerprint of the carbon surface; samples prepared from different precursors and under different activation conditions are expected to have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples; for each sample, the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same pattern, suggesting that the distribution is indeed characteristic of the carbon.
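The inversion step described above amounts to solving a discretized integral equation; a minimal sketch using Tikhonov-regularized non-negative least squares is given below. The "local isotherms" are synthetic Langmuir-like placeholders rather than GCMC results, and the grids and regularization weight are arbitrary.

```python
import numpy as np
from scipy.optimize import nnls

# The measured excess isotherm is modelled as a weighted sum of local
# isotherms, one per solid-fluid well depth; the area distribution f is
# recovered by regularized, non-negative least squares.
rng = np.random.default_rng(0)
pressures = np.logspace(-4, 0, 60)            # reduced pressure grid
well_depths = np.linspace(10.0, 100.0, 30)    # K, range quoted in the text

# Placeholder local isotherms: deeper wells adsorb at lower pressure.
K = np.exp(well_depths / 15.0)[None, :]
A = (K * pressures[:, None]) / (1.0 + K * pressures[:, None])

f_true = np.exp(-0.5 * ((well_depths - 58.0) / 8.0) ** 2)   # peaked near graphite
data = A @ f_true + 0.01 * rng.standard_normal(len(pressures))

# Tikhonov-regularized non-negative inversion:
#   minimize ||A f - data||^2 + lam * ||f||^2  subject to f >= 0,
# solved by stacking sqrt(lam)*I under A and zeros under the data.
lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(well_depths))])
b_aug = np.concatenate([data, np.zeros(len(well_depths))])
f_hat, _ = nnls(A_aug, b_aug)

# The integral of the recovered distribution plays the role of the total area.
total = np.sum(f_hat) * (well_depths[1] - well_depths[0])
print("integral of recovered area distribution:", total)
```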
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives of the individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model, using pre-characterized sensitivity profiles of the individual rungs of the resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the iterative optimization method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
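A minimal sketch of the second, regularization-based approach: given a pre-computed sensitivity matrix mapping each rung's complex drive to the B1 field over the imaging plane, the drives are found by Tikhonov-regularized least squares. The sensitivity matrix below is random placeholder data, not an FDTD result.

```python
import numpy as np

# Regularized least-squares choice of rung drives w so that S @ w approximates
# a homogeneous target field over the imaging plane.
rng = np.random.default_rng(0)
n_voxels, n_rungs = 500, 16
S = (rng.standard_normal((n_voxels, n_rungs))
     + 1j * rng.standard_normal((n_voxels, n_rungs)))   # placeholder sensitivities
b_target = np.ones(n_voxels, dtype=complex)             # uniform B1 target

# Tikhonov-regularized normal equations: (S^H S + lam I) w = S^H b_target.
lam = 1.0
w = np.linalg.solve(S.conj().T @ S + lam * np.eye(n_rungs),
                    S.conj().T @ b_target)

achieved = S @ w
print("relative field inhomogeneity:",
      np.std(np.abs(achieved)) / np.mean(np.abs(achieved)))
```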
Abstract:
Time-harmonic methods are required in the accurate design of RF coils as the operating frequency increases. This paper presents such a method to find a current density solution on the coil that will induce a desired magnetic field over an asymmetrically located target region within it. This inverse method appropriately considers the geometry of the coil via a Fourier series expansion, and incorporates some new regularization penalty functions in the solution process. A new technique is introduced by which the complex, time-dependent current density solution is approximated by a static coil winding pattern. Several winding pattern solutions are given, with more complex winding patterns corresponding to more desirable induced magnetic fields.
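The final step, approximating a continuous current-density solution by a static winding pattern, is often done by contouring a stream function; the sketch below illustrates that idea in one dimension with a made-up current density, and is not necessarily the paper's own discretization technique.

```python
import numpy as np

# Toy 1D illustration of discretizing a continuous current density into wires:
# integrate the density into a stream function and place one wire wherever the
# stream function crosses one of a set of equally spaced contour levels.
z = np.linspace(-0.5, 0.5, 2001)                          # axial coordinate (m)
j_phi = np.cos(2 * np.pi * z) * np.exp(-(z / 0.3) ** 2)   # placeholder density (A/m)

psi = np.cumsum(j_phi) * (z[1] - z[0])                    # stream function (A)
n_levels = 12
levels = np.linspace(psi.min(), psi.max(), n_levels + 2)[1:-1]

wire_positions = []
for lev in levels:
    crossings = np.where(np.diff(np.sign(psi - lev)) != 0)[0]
    wire_positions.extend(z[crossings])                   # one wire per crossing

print(f"{len(wire_positions)} wires at z =", np.round(sorted(wire_positions), 3))
```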
Abstract:
Radio-frequency (RF) coils are designed such that they induce homogeneous magnetic fields within some region of interest within a magnetic resonance imaging (MRI) scanner. Loading the scanner with a patient disrupts the homogeneity of these fields and can lead to a considerable degradation of the quality of the acquired image. In this paper, an inverse method is presented for designing RF coils, in which the presence of a load (patient) within the MRI scanner is accounted for in the model. To approximate the finite length of the coil, a Fourier series expansion is considered for the coil current density and for the induced fields. Regularization is used to solve this ill-conditioned inverse problem for the unknown Fourier coefficients. That is, the error between the induced and homogeneous target fields is minimized along with an additional constraint, chosen in this paper to represent the curvature of the coil windings. Smooth winding patterns are obtained for both unloaded and loaded coils. RF fields with a high level of homogeneity are obtained in the unloaded case and a limit to the level of homogeneity attainable is observed in the loaded case.
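A minimal sketch of the regularized solve described above: the Fourier coefficients of the current density are chosen to match a homogeneous target field while penalizing a curvature-like term. The field operator below is a random placeholder rather than the loaded-coil model.

```python
import numpy as np

# Generalized Tikhonov solve for Fourier coefficients c:
#   minimize ||A c - b||^2 + lam * ||L c||^2,
# where A maps coefficients to the induced field and L stands in for the
# winding-curvature penalty.
rng = np.random.default_rng(0)
n_field_pts, n_coeffs = 200, 24
A = rng.standard_normal((n_field_pts, n_coeffs))   # placeholder field operator
b = np.ones(n_field_pts)                           # homogeneous target field

# Simple tridiagonal second-difference matrix as a stand-in for curvature.
L = -2 * np.eye(n_coeffs) + np.eye(n_coeffs, k=1) + np.eye(n_coeffs, k=-1)

lam = 0.1
c = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print("relative field error:", np.linalg.norm(A @ c - b) / np.linalg.norm(b))
```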
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
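The "weighted average" statement above can be made explicit for a linearized, Tikhonov-regularized inversion: the rows of the resolution matrix R = G J are the averaging weights applied to the true parameter field. A minimal sketch with a random placeholder Jacobian:

```python
import numpy as np

# For a linear(ized) model d = J @ p, Tikhonov-regularized estimation gives
#   p_hat = G @ d  with  G = (J^T J + lam * I)^{-1} J^T,
# so p_hat = (G @ J) @ p_true: each estimate is a weighted average of the
# true parameters, with weights given by the rows of R = G @ J.
rng = np.random.default_rng(0)
n_obs, n_par = 30, 100                 # underdetermined: fewer data than parameters
J = rng.standard_normal((n_obs, n_par))  # placeholder Jacobian

lam = 10.0
G = np.linalg.solve(J.T @ J + lam * np.eye(n_par), J.T)
R = G @ J                              # resolution (averaging) matrix

# Diagonals far below 1 and spread-out rows indicate loss of detail in the
# calibrated field, exactly the effect described in the abstract.
print("diagonal of R (ideal is 1):", np.round(np.diag(R)[:5], 3))
print("row sums of averaging weights:", np.round(R[:5].sum(axis=1), 3))
```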
Abstract:
Government agencies responsible for riparian environments are assessing the combined utility of field survey and remote sensing for mapping and monitoring indicators of riparian zone health. The objective of this work was to determine if the structural attributes of savanna riparian zones in northern Australia can be detected from commercially available remotely sensed image data. Two QuickBird images and coincident field data covering sections of the Daly River and the South Alligator River - Barramundie Creek in the Northern Territory were used. Semi-variograms were calculated to determine the characteristic spatial scales of riparian zone features, both vegetative and landform. Interpretation of semi-variograms showed that structural dimensions of riparian environments could be detected and estimated from the QuickBird image data. The results also show that selecting the correct spatial resolution and spectral bands is essential to maximize the accuracy of mapping spatial characteristics of savanna riparian features. The distribution of foliage projective cover of riparian vegetation affected spectral reflectance variations in individual spectral bands differently. Pan-sharpened image data enabled small-scale information extraction (< 6 m) on riparian zone structural parameters. The semi-variogram analysis results provide the basis for an inversion approach using high spatial resolution satellite image data to map indicators of savanna riparian zone health.
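For concreteness, the empirical semi-variogram underlying this analysis is γ(h) = ½·mean[(z(x+h) − z(x))²] over pixel pairs separated by lag h; a minimal sketch on a random placeholder image (not QuickBird data) is given below.

```python
import numpy as np

# Empirical semi-variogram along image rows: the lag at which gamma levels
# off (the "range") indicates the characteristic size of image features,
# e.g. riparian vegetation patches or landform elements.
rng = np.random.default_rng(0)
band = rng.random((200, 200))                 # placeholder single-band image

def semivariogram(img, max_lag):
    gamma = []
    for h in range(1, max_lag + 1):
        diff = img[:, h:] - img[:, :-h]       # pixel pairs separated by lag h
        gamma.append(0.5 * np.mean(diff ** 2))
    return np.array(gamma)

gamma = semivariogram(band, max_lag=30)
print(np.round(gamma[:10], 4))
```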
Abstract:
The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by adding suitable regularization terms to the error function. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
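A minimal sketch of such a derivative-based regularization term for a single-hidden-layer network is given below; the data, network size and the choice of a first-derivative (rather than curvature) penalty are illustrative assumptions, and the gradient computation the paper derives is omitted.

```python
import numpy as np

# Regularized error for a single-hidden-layer network with scalar input:
#   E = 0.5*mean[(y - t)^2] + lam * 0.5*mean[(dy/dx)^2],
# i.e. the usual sum-of-squares term plus a smoothness penalty that depends
# on the derivative of the network mapping with respect to its input.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)[:, None]          # scalar inputs
t = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.shape)

n_hid = 10
W1 = rng.standard_normal((1, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, 1)); b2 = np.zeros(1)

def forward(x):
    a = x @ W1 + b1                          # hidden pre-activations
    h = np.tanh(a)
    y = h @ W2 + b2
    dy_dx = ((1.0 - h**2) * W1) @ W2         # analytic dy/dx for each input
    return y, dy_dx

def regularized_error(lam=0.1):
    y, dy_dx = forward(x)
    data_term = 0.5 * np.mean((y - t) ** 2)
    smooth_term = 0.5 * np.mean(dy_dx ** 2)  # penalizes steep network mappings
    return data_term + lam * smooth_term

print("regularized error:", regularized_error())
```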
Abstract:
We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network with an arbitrary number of hidden units which may be corrupted by Gaussian output noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
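A minimal sketch of the training scenario described above, with a toy two-layer teacher and student and a weight-decay term added to each on-line gradient step; all sizes and rates are arbitrary illustrative choices.

```python
import numpy as np

# On-line learning with weight decay: at each step a random input is labelled
# by a fixed two-layer teacher (plus Gaussian output noise), and a two-layer
# student takes a gradient step on the squared error plus weight decay.
rng = np.random.default_rng(0)
N, K_teacher, K_student = 100, 3, 4          # input dim, hidden-unit counts
B = rng.standard_normal((K_teacher, N)) / np.sqrt(N)   # teacher weights (fixed)
W = rng.standard_normal((K_student, N)) / np.sqrt(N)   # student weights

eta, lam, sigma = 0.1, 1e-3, 0.05            # learning rate, weight decay, noise

def g(u):                                    # hidden-unit activation
    return np.tanh(u)

for step in range(1000):
    x = rng.standard_normal(N)
    y_teacher = g(B @ x).sum() + sigma * rng.standard_normal()   # noisy label
    h = W @ x
    err = g(h).sum() - y_teacher
    grad = np.outer(err * (1.0 - g(h) ** 2), x)                  # dE/dW
    W -= eta / N * grad + eta * lam * W      # gradient step with weight decay

print("final student weight norm:", np.linalg.norm(W))
```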