950 results for Initial Value Problem
Abstract:
In this paper, we investigate the behavior of a family of steady-state solutions of a nonlinear reaction-diffusion equation when some reaction and potential terms are concentrated in an ε-neighborhood of a portion G of the boundary. We assume that this ε-neighborhood shrinks to G as the small parameter ε goes to zero. Also, we suppose the upper boundary of this ε-strip presents a highly oscillatory behavior. Our main goal is to show that this family of solutions converges to the solutions of a limit problem, a nonlinear elliptic equation that captures the oscillatory behavior. Indeed, the reaction term and the concentrating potential are transformed into a flux condition and a potential on G, which depend on the oscillating neighborhood. Copyright (C) 2012 John Wiley & Sons, Ltd.
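As a point of reference, concentration results of this type are usually stated for problems of roughly the following schematic form (a sketch under assumed notation, not the exact system of the paper):

\[
-\Delta u_\varepsilon + \lambda u_\varepsilon = f(u_\varepsilon) + \frac{1}{\varepsilon}\,\chi_{\omega_\varepsilon}(x)\,\bigl(g(u_\varepsilon) - V(x)\,u_\varepsilon\bigr) \quad \text{in } \Omega, \qquad \frac{\partial u_\varepsilon}{\partial n} = 0 \quad \text{on } \partial\Omega,
\]

where \(\omega_\varepsilon\) is the oscillating ε-strip that collapses to G. Formally, as ε → 0 the concentrated terms survive only on G, and the limit problem takes the form

\[
-\Delta u + \lambda u = f(u) \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} + \mu(x)\,V(x)\,u = \mu(x)\,g(u) \ \text{on } G, \qquad \frac{\partial u}{\partial n} = 0 \ \text{on } \partial\Omega \setminus G,
\]

with a weight μ that records the oscillatory behavior of the upper boundary of the strip, in agreement with the flux condition and potential on G described above.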
Abstract:
We propose simple heuristics for the assembly line worker assignment and balancing problem. This problem typically occurs in assembly lines in sheltered work centers for the disabled. Unlike the well-known simple assembly line balancing problem, the task execution times vary according to the assigned worker. We develop a constructive heuristic framework based on task and worker priority rules defining the order in which the tasks and workers should be assigned to the workstations. We present a number of such rules and compare their performance across three possible uses: as a stand-alone method, as an initial solution generator for meta-heuristics, and as a decoder for a hybrid genetic algorithm. Our results show that the heuristics are fast, obtain good results as a stand-alone method, and are efficient when used as an initial solution generator or as a solution decoder within more elaborate approaches.
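To make the idea of task and worker priority rules concrete, here is a minimal Python sketch of a rule-based construction (illustrative only: the specific rules, data layout, and station-opening order are assumptions, not the framework proposed in the paper):

```python
def build_solution(tasks, workers, exec_time, preds, cycle_time):
    """Greedy construction: exec_time[w][t] is the execution time of task t by
    worker w (None if w cannot execute t); preds[t] is the set of immediate
    predecessors of t."""
    unassigned, free_workers = set(tasks), set(workers)
    stations = []
    while unassigned and free_workers:
        # Worker priority rule (assumption): lowest average time over the
        # remaining tasks the worker can execute.
        def avg_time(w):
            ts = [exec_time[w][t] for t in unassigned if exec_time[w][t] is not None]
            return sum(ts) / len(ts) if ts else float("inf")

        w = min(free_workers, key=avg_time)
        free_workers.remove(w)

        load, assigned = 0, []
        while True:
            done = set(tasks) - unassigned  # tasks already placed
            # Task priority rule (assumption): among precedence-feasible tasks
            # that still fit in the cycle time, pick the longest one for w.
            candidates = [t for t in unassigned
                          if preds[t] <= done
                          and exec_time[w][t] is not None
                          and load + exec_time[w][t] <= cycle_time]
            if not candidates:
                break
            t = max(candidates, key=lambda u: exec_time[w][u])
            unassigned.remove(t)
            assigned.append(t)
            load += exec_time[w][t]
        stations.append((w, assigned, load))
    return stations, unassigned  # non-empty second value: construction failed
```

Other rules (for example, number of successors for tasks, or relative slowness for workers) plug into the two marked decision points, and the same skeleton can serve as a decoder inside a metaheuristic.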
Abstract:
A transmission problem involving two Euler-Bernoulli equations modeling the vibrations of a composite beam is studied. Assuming that the beam is clamped at one extremity and rests on an elastic bearing at the other, the existence of a unique global solution and decay rates of the energy are obtained by adding just one damping device at the end containing the bearing mechanism.
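For context, a transmission problem of this kind couples two Euler-Bernoulli equations of the schematic form (notation assumed, not the exact system of the paper)

\[
\rho_1\, u_{tt} + \alpha_1\, u_{xxxx} = 0 \ \text{ in } (0, L_0) \times (0, \infty), \qquad
\rho_2\, v_{tt} + \alpha_2\, v_{xxxx} = 0 \ \text{ in } (L_0, L) \times (0, \infty),
\]

together with transmission conditions at the interface x = L_0 (continuity of displacement, slope, bending moment and shear force: u = v, u_x = v_x, \alpha_1 u_{xx} = \alpha_2 v_{xx}, \alpha_1 u_{xxx} = \alpha_2 v_{xxx}), clamped conditions at x = 0, and the elastic-bearing condition, complemented by the single damping device, at x = L.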
Abstract:
The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. It has until now not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is the most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well observed facts and including realistic physics in a 3D magneto-hydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short startup phase, where the initial thermal stratification is quickly forgotten, after which it stabilizes in statistical equilibrium. In this state, the magnetic field is able to dissipate the same amount of energy as is estimated to be lost through radiation, which is the main energy loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, and hot loops similar to those observed through narrow band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.
Abstract:
In this paper, we have used Geographical Information Systems (GIS) to solve the planar Huff problem considering different demand distributions and forbidden regions. Most of the papers connected with competitive location problems consider that the demand is aggregated in a finite set of points. In a few other cases, the models suppose that the demand is distributed over the feasible region according to a functional form, mainly a uniform distribution. In this case, in addition to the discrete and uniform demand distributions, we have considered that the demand is represented by a population surface model, that is, a raster map where each pixel has an associated value corresponding to the population living in the area it covers...
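As a reference point, the classical Huff model assigns the demand of each point (here, each raster pixel) to facilities in proportion to attractiveness divided by a power of distance. A minimal Python sketch of evaluating a candidate location against a raster demand surface might look like this (the parameter values, function names and data layout are illustrative assumptions, not the paper's implementation):

```python
import math

def huff_captured_demand(candidate, competitors, pixels,
                         attractiveness=1.0, beta=2.0):
    """Expected demand captured by a new facility placed at `candidate`.

    pixels: iterable of (x, y, population) cells of a raster demand surface.
    competitors: list of (x, y, attractiveness) for existing facilities.
    Huff rule: P(pixel -> facility j) = A_j / d_j**beta, normalized over all
    facilities, including the candidate."""
    def utility(px, py, fx, fy, a):
        d = math.hypot(px - fx, py - fy) or 1e-9   # avoid division by zero
        return a / d ** beta

    captured = 0.0
    for px, py, pop in pixels:
        u_new = utility(px, py, candidate[0], candidate[1], attractiveness)
        u_all = u_new + sum(utility(px, py, fx, fy, a)
                            for fx, fy, a in competitors)
        captured += pop * u_new / u_all
    return captured
```

Maximizing this quantity over the candidate pixels, while excluding those lying in forbidden regions, corresponds to the planar Huff location problem discussed above.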
Abstract:
This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j = 1, ..., m, is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity property for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity, for boundary points, for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
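In formula form, the control distance just described is (with the notation above)

\[
d(x, y) = \inf \Bigl\{\, T > 0 \ :\ \exists\, \gamma : [0, T] \to \mathbb{R}^N,\ \gamma(0) = x,\ \gamma(T) = y,\ \dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)),\ \textstyle\sum_{j} a_j(t)^2 \le 1 \,\Bigr\},
\]

i.e., the shortest time needed to connect x and y along admissible curves whose velocity stays in the unit ball spanned by X_1, ..., X_m.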
Abstract:
The purpose of the work is to define and calculate a factor of collapse related to the traditional method used to design sheet pile walls, and to find the parameters that most influence a finite element model representative of this problem. The text is structured as follows: chapters 1 to 5 analyse a series of topics useful for understanding the problem, while the considerations mainly related to the purpose of the text are reported in chapters 6 to 10. The first part of the document presents the following topics: what a sheet pile wall is, which codes govern the design of these structures and what they prescribe, how a mathematical model of the soil can be formulated, some fundamentals of finite element analysis, and finally, the traditional methods that support the design of sheet pile walls. In chapter 6 we performed a parametric analysis, answering the second part of the purpose of the work. Comparing the results of a laboratory test on a cantilever sheet pile wall in a sandy soil with those provided by a finite element model of the same problem, we concluded that, in modelling a sandy soil, attention should be paid to the value of cohesion inserted in the model (some programs, like Abaqus, do not accept a null value for this parameter); the friction angle and the elastic modulus of the soil significantly influence the behaviour of the soil-structure system, while other parameters, such as the dilatancy angle or Poisson's ratio, do not seem to influence it. The logical path followed in the second part of the text is as follows. We analysed two different structures: the first supports an excavation of 4 m, the second an excavation of 7 m. Both structures are first designed using the traditional method, then implemented in a finite element program (Abaqus) and pushed to collapse by decreasing the friction angle of the soil. The factor of collapse is the ratio between the tangent of the initial friction angle and the tangent of the friction angle at collapse. Finally, we performed a more detailed analysis of the first structure, observing that the value of the factor of collapse is influenced by a wide range of parameters, including the coefficients assumed in the traditional method and the relative stiffness of the soil-structure system. In the majority of cases, we found that the value of the factor of collapse lies between 1.25 and 2. With some considerations, reported in the text, these values can be compared with the value of the safety factor proposed by the code (linked to the friction angle of the soil).
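In equation form, the factor of collapse used above is

\[
F_c = \frac{\tan \varphi'_{\text{initial}}}{\tan \varphi'_{\text{collapse}}},
\]

essentially the strength-reduction ratio familiar from phi-c reduction analyses. As a purely illustrative example, if a wall designed for a soil with φ' = 35° reaches collapse when the friction angle is reduced to 25°, then F_c = tan 35° / tan 25° ≈ 1.5.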
Abstract:
The olive oil extraction industry is responsible for the production of large quantities of vegetation waters, represented by the constitutive water of the olive fruit and by the water used during the process. This by-product represents an environmental problem in olive-growing areas because of its high content of organic matter, with high BOD5 and COD values. For that reason the disposal of the vegetation water is very difficult and requires prior depollution. The organic matter of vegetation water mainly consists of polysaccharides, sugars, proteins, organic acids, oil and polyphenols. These last compounds are the main ones responsible for the pollution problems, due to their antimicrobial activity, but at the same time they are well known for their antioxidant properties. The most concentrated phenolic compounds in the waters, and also in virgin olive oils, are secoiridoids such as oleuropein, demethyloleuropein and ligstroside derivatives (the dialdehydic form of elenolic acid linked to 3,4-DHPEA or p-HPEA (3,4-DHPEA-EDA or p-HPEA-EDA)) and an isomer of the oleuropein aglycon (3,4-DHPEA-EA). The management of olive oil vegetation water has been extensively investigated and several different valorisation methods have been proposed, such as direct use as fertilizer or transformation by physico-chemical or biological treatments. In recent years researchers have focused their interest on the recovery of the phenolic fraction from this waste, looking for its exploitation as a natural antioxidant source. At present only a few contributions have addressed large-scale phenol recovery, and further investigations are required to evaluate the feasibility and costs of the proposed processes. The present PhD thesis reports a preliminary description of a new industrial-scale process for the recovery of the phenolic fraction from olive oil vegetation water treated with enzymes, by direct membrane filtration (microfiltration/ultrafiltration with a cut-off of 250 kDa, ultrafiltration with a cut-off of 7 kDa/10 kDa, and nanofiltration/reverse osmosis), partial purification by an SPE-based purification system and by a liquid-liquid extraction (LLE) system, with a simultaneous reduction of the pollution-related problems. The phenolic fractions of all the samples obtained were characterized qualitatively and quantitatively by HPLC analysis. The process efficiency, in terms of flows and of phenolic recovery, gave good results: the final phenolic recovery is about 60% of the initial content in the vegetation waters. The final concentrate showed a high content of phenols, which suggests a possible use as a zootechnical nutritional supplement. The purification of the final concentrate guaranteed a high purity level of the phenolic extract, especially in the SPE procedure using XAD-16 (73% of the total phenolic content of the concentrate). This purity level could permit a future use in the food industry as a food additive or, thanks to the strong antioxidant activity, in the pharmaceutical or cosmetic industry. The depollution of the vegetation water also gave good results: the final reverse osmosis permeate has a low pollutant load in terms of COD and BOD5 values (2% of the initial vegetation water), which could allow its recycling in the virgin olive oil mechanical extraction system, producing a water saving and thus reducing the oil industry's disposal costs.
Abstract:
The present work is motivated by biological questions concerning the behaviour of membrane potentials in neurons. A widely considered model for spiking neurons is the following. Between spikes, the membrane potential behaves like a diffusion process X given by the SDE dX_t = β(X_t) dt + σ(X_t) dB_t, where (B_t) denotes a standard Brownian motion. Spikes are explained as follows: as soon as the potential X crosses a certain excitation threshold S, a spike occurs, after which the potential is reset to a fixed value x_0. In applications it is sometimes possible to observe a diffusion process X between the spikes and to estimate the coefficients β(·) and σ(·) of the SDE. Nevertheless, the thresholds x_0 and S must be determined in order to fix the model. One way to approach this problem is to regard x_0 and S as parameters of a statistical model and to estimate them. In the present work four different cases are discussed, in which we assume that the membrane potential X between spikes is, respectively, a Brownian motion with drift, a geometric Brownian motion, an Ornstein-Uhlenbeck process, or a Cox-Ingersoll-Ross process. In addition, we observe the times between consecutive spikes, which we regard as i.i.d. hitting times of the threshold S by X started at x_0. The first two cases are very similar and the maximum likelihood estimator can be given explicitly in each; moreover, using LAN theory, the optimality of these estimators is shown. In the OU and CIR cases we choose a minimum-distance method based on comparing the empirical and the true Laplace transform with respect to a Hilbert space norm. We prove that all estimators are strongly consistent and asymptotically normally distributed. In the last chapter we examine the efficiency of the minimum-distance estimators on simulated data. Furthermore, applications to real data sets and their results are discussed in detail.
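As an illustration of the data-generating mechanism assumed here, inter-spike intervals can be simulated as first hitting times of the threshold S by, for example, an Ornstein-Uhlenbeck process started at x_0. The following minimal Python sketch uses an Euler-Maruyama discretization; the parameter values and function names are illustrative assumptions, not the thesis' code:

```python
import numpy as np

def simulate_isi(n_spikes, x0, S, theta, mu, sigma, dt=1e-3, rng=None):
    """Inter-spike intervals as first hitting times of S by an OU process
    dX_t = theta*(mu - X_t) dt + sigma dB_t started at x0 (Euler-Maruyama)."""
    rng = rng or np.random.default_rng()
    intervals = []
    for _ in range(n_spikes):
        x, t = x0, 0.0
        while x < S:
            x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        intervals.append(t)          # spike time; the potential resets to x0
    return np.array(intervals)

# Example with illustrative parameters: subthreshold mean mu below the threshold S.
isi = simulate_isi(n_spikes=200, x0=0.0, S=1.0, theta=1.0, mu=0.8, sigma=0.5)
```

Estimators of x_0 and S, such as the minimum-distance estimator mentioned above, can then be evaluated against samples of this kind.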
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and operational applications are becoming more frequent. The fact that many NWP centres have recently put convection-permitting forecast models into operation, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, so that radar errors do not degrade the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma scale; this scale can be modelled only with NWP models of the highest resolution, such as the COSMO-2 model. One of the problems of modelling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated into a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its kilometre-scale resolution available every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on coupling a high-resolution meteorological model with a hydrological one.
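For orientation, the basic idea of latent heat nudging is to scale the model's latent-heating increments by the ratio of radar-derived to modelled precipitation rates; schematically (notation assumed),

\[
\Delta T_{\mathrm{LHN}} = \left( \frac{RR_{\mathrm{radar}}}{RR_{\mathrm{model}}} - 1 \right) \Delta T_{\mathrm{LH}},
\]

so any error in RR_radar is injected directly into the model's temperature field, which is why a quality description of the radar data is a prerequisite for the scheme.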
Abstract:
This thesis, after presenting recent advances obtained for the two-dimensional bin packing problem, focuses on the case where guillotine restrictions are imposed. A mathematical characterization of non-guillotine patterns is provided, and the relation between the solution value of the two-dimensional problem with guillotine restrictions and that of the unrestricted two-dimensional problem is studied from a worst-case perspective. Finally, it presents a new heuristic algorithm for the two-dimensional problem with guillotine restrictions, based on partial enumeration, and computationally evaluates its performance on a large set of instances from the literature. Computational experiments show that the algorithm is able to produce proven optimal solutions for a large number of problems and gives a tight approximation of the optimum in the remaining cases.
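To illustrate what the guillotine restriction means operationally, here is a small recursive check in Python (an illustrative sketch, not the characterization or the heuristic developed in the thesis) that decides whether a given packing pattern can be reproduced by edge-to-edge cuts:

```python
def is_guillotine(rects):
    """rects: list of (x, y, w, h) placed in a bin; returns True if the
    pattern can be obtained by a sequence of edge-to-edge (guillotine) cuts."""
    if len(rects) <= 1:
        return True
    # Try every vertical cut position given by a rectangle edge.
    for cut in {x for x, _, _, _ in rects} | {x + w for x, _, w, _ in rects}:
        left = [r for r in rects if r[0] + r[2] <= cut]
        right = [r for r in rects if r[0] >= cut]
        if left and right and len(left) + len(right) == len(rects):
            if is_guillotine(left) and is_guillotine(right):
                return True
    # Try every horizontal cut position.
    for cut in {y for _, y, _, _ in rects} | {y + h for _, y, _, h in rects}:
        bottom = [r for r in rects if r[1] + r[3] <= cut]
        top = [r for r in rects if r[1] >= cut]
        if bottom and top and len(bottom) + len(top) == len(rects):
            if is_guillotine(bottom) and is_guillotine(top):
                return True
    return False
```

A cut is accepted only if no rectangle straddles it, and both sides are then checked recursively; the classic pinwheel arrangement of rectangles is rejected, which is exactly the kind of non-guillotine pattern the characterization above must exclude.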
Abstract:
The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example the guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings such as rounding errors or underflow, therefore they can deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight controlling or nuclear plant management due to the potential catastrophic consequences. We propose a method that gives a certified answer whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some way the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless limited in general to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most of the cases very close to the known exact objective values. We prove the usability of the method we developed by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in the running time, especially on the large instances.
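One standard ingredient of such certified answers is checking a certificate with exact arithmetic. For instance, an infeasibility certificate for a system Ax <= b is a Farkas vector y >= 0 with yᵀA = 0 and yᵀb < 0, and it can be verified free of rounding error with rational arithmetic. The sketch below is illustrative only and is not the authors' implementation:

```python
from fractions import Fraction

def verify_infeasibility_certificate(A, b, y):
    """Check a Farkas certificate of infeasibility for {x : A x <= b}:
    y >= 0, y^T A = 0 and y^T b < 0 prove that no feasible x exists.
    All entries are converted to exact rationals, so the check is rigorous."""
    A = [[Fraction(a) for a in row] for row in A]
    b = [Fraction(v) for v in b]
    y = [Fraction(v) for v in y]
    if any(v < 0 for v in y):
        return False
    n = len(A[0])
    combo = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(n)]
    return all(c == 0 for c in combo) and sum(yi * bi for yi, bi in zip(y, b)) < 0

# Example: x <= 0 and -x <= -1 is infeasible; y = (1, 1) certifies it.
print(verify_infeasibility_certificate([[1], [-1]], [0, -1], [1, 1]))  # True
```

In a branch-and-bound setting, such a certificate identifies a small infeasible subset of constraints, which is what makes the derived conflict clauses compact.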
Abstract:
The prognosis of patients in whom pulmonary embolism (PE) is suspected but ruled out is poorly understood. We evaluated whether the initial assessment of clinical probability of PE could help to predict the prognosis for these patients.
Abstract:
The diagnostic performance of isolated high-grade prostatic intraepithelial neoplasia in prostatic biopsies has recently been questioned, and molecular analysis of high-grade prostatic intraepithelial neoplasia has been proposed for improved prediction of prostate cancer. Here, we retrospectively studied the value of isolated high-grade prostatic intraepithelial neoplasia and the immunohistochemical markers α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67 for better risk stratification of high-grade prostatic intraepithelial neoplasia in our local Swiss population. From an initial 165 diagnoses of isolated high-grade prostatic intraepithelial neoplasia, we refuted 61 (37%) after consensus expert review. We used 30 reviewed high-grade prostatic intraepithelial neoplasia cases with simultaneous biopsy prostate cancer as positive controls. Rebiopsies were performed in 66 patients with isolated high-grade prostatic intraepithelial neoplasia, and the median time interval between initial and repeat biopsy was 3 months. Twenty (30%) of the rebiopsies were positive for prostate cancer, and 10 (15%) showed persistent isolated high-grade prostatic intraepithelial neoplasia. Another 2 (3%) of the 66 patients were diagnosed with prostate cancer in a second rebiopsy. Mean prostate-specific antigen serum levels did not significantly differ between the 22 patients with prostate cancer and the 44 without prostate cancer in rebiopsies, and the 30 positive control patients, respectively (median values, 8.1, 7.7, and 8.8 ng/mL). None of the immunohistochemical markers, including α-methylacyl coenzyme A racemase, Bcl-2, annexin II, and Ki-67, revealed a statistically significant association with the risk of prostate cancer in repeat biopsies. Taken together, the 33% risk of being diagnosed with prostate cancer after a diagnosis of high-grade prostatic intraepithelial neoplasia justifies rebiopsy, at least in our not systematically prostate-specific antigen-screened population. There is not enough evidence that immunohistochemical markers can reproducibly stratify the risk of prostate cancer after a diagnosis of isolated high-grade prostatic intraepithelial neoplasia.
Abstract:
SETTING: Correctional settings and remand prisons. OBJECTIVE: To critically discuss calculations for epidemiological indicators of the tuberculosis (TB) burden in prisons and to provide recommendations to improve study comparability. METHODS: A hypothetical data set illustrates issues in determining incidence and prevalence. The appropriate calculation of the incidence rate is presented and problems arising from cross-sectional surveys are clarified. RESULTS: Cases recognized during the first 3 months should be classified as prevalent at entry and excluded from any incidence rate calculation. The numerator for the incidence rate includes persons detected as having developed TB during a specified period of time subsequent to the initial 3 months. The denominator is person-time at risk from 3 months onward to the end point (TB or end of the observation period). Preferably, entry time, exit time and event time are known for each inmate to determine person-time at risk. Failing that, an approximation consists of the sum of monthly head counts, excluding prevalent cases and those persons no longer at risk from both the numerator and the denominator. CONCLUSIONS: The varying durations of inmate incarceration in prisons pose challenges for quantifying the magnitude of the TB problem in the inmate population. Recommendations are made to measure incidence and prevalence.
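To make the recommended calculation concrete, here is a small illustrative sketch in Python (the data layout, the 90-day window as the "first 3 months", and the toy figures are assumptions used only for illustration, not data from the study):

```python
from datetime import date, timedelta

def incidence_rate(inmates, follow_up_end, window_days=90):
    """TB incidence rate per person-year among inmates.

    inmates: list of dicts with 'entry' (date), 'exit' (date or None) and
    'tb_date' (date or None). Cases detected within `window_days` of entry are
    treated as prevalent at entry and excluded from numerator and denominator;
    person-time at risk runs from the end of that window to TB diagnosis,
    release, or the end of follow-up, whichever comes first."""
    cases, person_days = 0, 0
    for p in inmates:
        risk_start = p["entry"] + timedelta(days=window_days)
        end = min(d for d in (p["exit"], p["tb_date"], follow_up_end) if d is not None)
        if p["tb_date"] is not None and p["tb_date"] <= risk_start:
            continue  # prevalent at entry: excluded entirely
        if end <= risk_start:
            continue  # released or censored before contributing time at risk
        person_days += (end - risk_start).days
        if p["tb_date"] is not None and p["tb_date"] <= end:
            cases += 1  # incident case detected while still at risk
    return cases / (person_days / 365.25) if person_days else float("nan")

# Hypothetical example: one prevalent case, one incident case, one censored stay.
inmates = [
    {"entry": date(2020, 1, 1), "exit": None, "tb_date": date(2020, 2, 1)},   # prevalent
    {"entry": date(2020, 1, 1), "exit": None, "tb_date": date(2020, 9, 1)},   # incident
    {"entry": date(2020, 3, 1), "exit": date(2020, 12, 1), "tb_date": None},  # censored
]
print(round(incidence_rate(inmates, follow_up_end=date(2020, 12, 31)), 2))
```

When individual entry, exit and event dates are not available, the denominator can instead be approximated by the sum of monthly head counts, as recommended above.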