Abstract:
[EN] Diabetic foot ulcers (DFUs) represent a major clinical challenge in the ageing population. To address this problem, rhEGF-loaded poly(lactic-co-glycolic acid) (PLGA)-alginate microspheres (MS) were prepared by a modified w/o/w double-emulsion/solvent-evaporation method. Different formulations were evaluated with the aim of optimising the MS properties by adding NaCl to the surfactant solution and/or the solvent removal phase and by adding alginate as a second polymer. Characterisation of the developed MS showed that alginate incorporation increased the encapsulation efficiency (EE), and that NaCl, besides increasing the EE, also made the particle surface smooth and regular. Once the MS were optimised, the target loading of rhEGF was increased to 1% (PLGA-alginate MS), and the particles were sterilised by gamma radiation to provide the correct dosage for in vivo studies. In vitro cell culture assays demonstrated that neither the microencapsulation nor the sterilisation process affected rhEGF bioactivity or rhEGF-induced wound contraction. Finally, the MS were evaluated in vivo in a full-thickness wound model in diabetised Wistar rats. Animals treated with rhEGF MS showed a statistically significant decrease in wound area by days 7 and 11, complete re-epithelialisation by day 11, and earlier resolution of the inflammatory process. Overall, these findings demonstrate the promising potential of rhEGF-loaded MS (PLGA-alginate MS) to promote faster and more effective wound healing, and suggest their possible application in DFU treatment.
Abstract:
The aim of this paper is to propose a new solution for the roommate problem with strict preferences. We introduce the solution of maximum irreversibility and consider almost stable matchings (Abraham et al. [2]) and maximum stable matchings (Ta [30], [32]). We find that almost stable matchings are incompatible with the other two solutions. Hence, to solve the roommate problem we propose matchings that lie at the intersection of the maximum irreversible matchings and maximum stable matchings, which we call Q-stable matchings. These matchings are core consistent, and we offer an efficient algorithm for computing one of them. The outcome of the algorithm belongs to an absorbing set.
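For reference, the blocking-pair count that underlies the "almost stable" notion of Abraham et al. [2] can be computed directly. The following minimal Python sketch (with an invented toy instance, not the paper's algorithm) counts the blocking pairs of a given matching in a roommate instance with strict preferences.

```python
# Minimal sketch (toy instance, not the paper's algorithm): count the blocking
# pairs of a matching in a roommate instance with strict preferences.

def blocking_pairs(prefs, match):
    """prefs[i]: i's strict preference list (most preferred first);
    match[i]: i's partner, or None if i is unmatched."""
    def prefers(i, j):
        # True if i strictly prefers j to i's current situation
        # (being unmatched is worse than any listed partner).
        if match[i] is None:
            return True
        return prefs[i].index(j) < prefs[i].index(match[i])

    agents = list(prefs)
    return [(i, j) for i in agents for j in agents
            if i < j and match[i] != j and prefers(i, j) and prefers(j, i)]

# Classic 4-agent instance with cyclic preferences: no stable matching exists,
# so every matching has at least one blocking pair.
prefs = {1: [2, 3, 4], 2: [3, 1, 4], 3: [1, 2, 4], 4: [1, 2, 3]}
match = {1: 2, 2: 1, 3: 4, 4: 3}
print(blocking_pairs(prefs, match))   # -> [(2, 3)]
```

An almost stable matching minimises the length of this list; the Q-stable matchings proposed in the paper are selected by different criteria, but the same primitive (blocking pairs) underlies all three notions.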
Abstract:
Abstract to Part I
The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. The effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
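For concreteness, here is a minimal sketch (invented toy geometry, not the thesis code) of a SIRT/SART-style iterative back-projection applied to a linearised attenuation problem d = G m, where each datum d_i is a path-attenuation measurement (e.g. t*_i) and m_j = 1/Q in block j.

```python
# Minimal sketch (invented toy geometry, not the thesis code): simultaneous
# iterative back-projection for a linearised attenuation problem d = G m,
# where d_i is a path-attenuation datum and m_j = 1/Q in block j.
import numpy as np

def sirt_backprojection(G, d, n_iter=200):
    """Update m <- m + C G^T R (d - G m), with R and C the inverse row and
    column sums of G, i.e. each residual is distributed back along its ray in
    proportion to the ray's path length in each block."""
    m = np.zeros(G.shape[1])
    R = 1.0 / np.where(G.sum(axis=1) > 0, G.sum(axis=1), 1.0)
    C = 1.0 / np.where(G.sum(axis=0) > 0, G.sum(axis=0), 1.0)
    for _ in range(n_iter):
        residual = d - G @ m
        m = m + C * (G.T @ (R * residual))
    return m

# Toy example: 3 rays crossing 4 blocks (entries are path lengths in km).
G = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 1.5, 1.5, 0.0],
              [0.0, 0.0, 1.0, 2.0]])
m_true = np.array([0.01, 0.03, 0.02, 0.005])   # true 1/Q per block
d = G @ m_true                                  # synthetic attenuation data
m_est = sirt_backprojection(G, d)
print(m_est, G @ m_est - d)   # fits the data; underdetermined, so m_est != m_true
```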
Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.
No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.
Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.
Abstract to Part II
Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile to the Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena that are due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.
Abstract:
The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:
i) the mean exit time
ii) the phase-space distribution of exit locations.
When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.
The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
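A minimal sketch of the kind of Monte Carlo check mentioned above, for a generic example system (not one from the text): estimating the mean time to exit the interval (-1, 1) against the restoring flow of dX = -X dt + sqrt(2ε) dW by Euler-Maruyama simulation.

```python
# Minimal sketch (generic example, not the thesis code): Monte Carlo estimate of
# the mean exit time from (-1, 1) for dX = -X dt + sqrt(2*eps) dW, i.e. exit
# "against the flow" that pushes trajectories back toward x = 0.
import numpy as np

rng = np.random.default_rng(0)

def exit_time(eps, x0=0.0, dt=1e-3, t_max=1e4):
    """Euler-Maruyama integration until the trajectory leaves (-1, 1)."""
    x, t = x0, 0.0
    noise_scale = np.sqrt(2.0 * eps * dt)
    while abs(x) < 1.0 and t < t_max:
        x += -x * dt + noise_scale * rng.standard_normal()
        t += dt
    return t

eps = 0.25
times = [exit_time(eps) for _ in range(100)]
print(f"mean exit time ~ {np.mean(times):.1f} at eps = {eps}")
# The singular-perturbation analysis predicts growth like exp(DeltaV/eps) as
# eps -> 0 (here DeltaV = 1/2), so direct simulation is only feasible for
# moderate noise levels.
```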
Abstract:
Measuring electrical activity in large numbers of cells with high spatial and temporal resolution is a fundamental problem for the study of neural development and information processing. To address this problem, we have constructed FlaSh: a novel, genetically encoded probe that can be used to measure transmembrane voltage in single cells. We fused a modified green fluorescent protein (GFP) into a voltage-sensitive potassium channel so that voltage-dependent rearrangements in the potassium channel induce changes in the fluorescence of GFP. A voltage sensor encoded in DNA has the advantage that it may be introduced into an organism non-invasively and targeted to specific developmental stages, brain regions, cell types, and sub-cellular compartments.
We also describe modifications to FlaSh that shift its color, kinetics, and dynamic range. We used multiple green fluorescent proteins to produce variants of the FlaSh sensor that generate ratiometric signal output via fluorescence resonance energy transfer (FRET). Finally, we describe initial work toward FlaSh variants that are sensitive to G-protein coupled receptor (GPCR) activation. These sensors can be used to design functional assays for receptor activation in living cells.
Abstract:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε) D_x y(x) − A(x,ε) y(x) = f(x,ε),   0 ≤ x ≤ 1,   (1a)
By ≡ L(ε) y(0) + R(ε) y(1) = g(ε),   ε → 0⁺.   (1b)
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and last m diagonal elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0⁺ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that for stepsizes much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
Furthermore, the existence of a similarity transformation which block diagonalizes a matrix is presented as well as exponential bounds on certain fundamental solution matrices associated with the problem (1).
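A minimal numerical illustration of the reduced-problem convergence described above, for an invented instance of (1) with m = 1 (not an example from the text), using SciPy's collocation solver.

```python
# Minimal sketch (invented instance, not from the text): problem (1) with m = 1,
#   y1' = y2,   eps*y2' = y1 - y2,   y1(0) = 1,   y2(0) = 0,
# compared with its reduced problem (eps = 0):  y2 = y1,  y1' = y1,  y1(0) = 1.
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-2

def rhs(x, y):
    # Omega(eps) y' = A y with Omega = diag(1, eps), A = [[0, 1], [1, -1]];
    # the lower right 1x1 block of A is -1, so its real part is nonzero.
    return np.vstack([y[1], (y[0] - y[1]) / eps])

def bc(ya, yb):
    # A permissible special case of (1b): both conditions imposed at x = 0.
    return np.array([ya[0] - 1.0, ya[1]])

x = np.linspace(0.0, 1.0, 200)
sol = solve_bvp(rhs, bc, x, np.ones((2, x.size)))

x_check = np.array([0.25, 0.5, 1.0])
print(sol.sol(x_check)[0])   # slow component of the full solution
print(np.exp(x_check))       # reduced-problem solution y1 = exp(x)
# Away from the boundary layer at x = 0 the two agree to O(eps), as the
# convergence result for regular problems predicts.
```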
Abstract:
Unremitting waves and occasional storms bring dynamic forces to bear on the coast. Sediment flux results in various patterns of erosion and accretion, with an overwhelming majority (80 to 90 percent) of coastline in the eastern U.S. exhibiting net erosion in recent decades. Climate change threatens to increase the intensity of storms and raise sea level 18 to 59 centimeters over the next century. Following a lengthy tradition of economic models for natural resource management, this paper provides a dynamic optimization model for managing coastal erosion and explores the types of data necessary to employ the model for normative policy analysis. The model conceptualizes the benefits of beach and dune sediments as service flows accruing to nearby residential property owners, local businesses, recreational beach users, and perhaps others. Benefits can also include improvements in habitat for beach- and dune-dependent plant and animal species. The costs of maintaining beach sediment in the presence of coastal erosion include expenditures on dredging, pumping, and placing sand on the beach to maintain width and height. Other costs can include negative impacts on the nearshore environment. Employing these constructs, an optimal control model is specified that provides a framework for identifying the conditions under which beach replenishment enhances economic welfare; an optimal schedule for replenishment can be derived under a constant sea level and erosion rate (short term) as well as under an increasing sea level and erosion rate (long term). Under some simplifying assumptions, the conceptual framework can examine the time horizon of management responses under sea level rise, identifying the timing of a shift to passive management (shoreline retreat) and exploring factors that influence this potential shift.
Abstract:
In this work, the results of numerical simulations of X-ray fluorescence holograms and the reconstructed atomic images for an Fe single crystal are given. The influences of the recording angle ranges and of the polarization effect on the reconstruction of the atomic images are discussed. The process of removing twin images by multiple-energy fluorescence holography and of expanding the energy range of the incident X-rays to improve the resolution of the reconstructed images is presented.
Abstract:
The majority of young, low-mass stars are surrounded by optically thick accretion disks. These circumstellar disks provide large reservoirs of gas and dust that will eventually be transformed into planetary systems. Theory and observations suggest that the earliest stage toward planet formation in a protoplanetary disk is the growth of particles, from sub-micron-sized grains to centimeter-sized pebbles. Theory indicates that small interstellar grains are well coupled to the gas and are incorporated into the disk during the proto-stellar collapse. These dust particles settle toward the disk mid-plane and simultaneously grow through collisional coagulation on a very short timescale. Observationally, grain growth can be inferred by measuring the spectral energy distribution at long wavelengths, which traces the continuum dust emission spectrum and hence the dust opacity. Several observational studies have indicated that the dust component in protoplanetary disks has evolved compared to interstellar medium dust particles, implying at least 4 orders of magnitude of growth in particle size. However, the limited angular resolution and poor sensitivity of previous observations have not allowed further exploration of this astrophysical process.
As part of my thesis, I embarked on an observational program to search for evidence of radial variations in the dust properties across a protoplanetary disk, which may be indicative of grain growth. By making use of high angular resolution observations obtained with CARMA, VLA, and SMA, I searched for radial variations in the dust opacity inside protoplanetary disks. These observations span more than an order of magnitude in wavelength (from sub-millimeter to centimeter wavelengths) and attain spatial resolutions down to 20 AU. I characterized the radial distribution of the circumstellar material and constrained radial variations of the dust opacity spectral index, which may originate from particle growth in these circumstellar disks. Furthermore, I compared these observational constraints with simple physical models of grain evolution that include collisional coagulation, fragmentation, and the interaction of these grains with the gaseous disk (the radial drift problem). For the parameters explored, these observational constraints are consistent with a population of grains limited in size by radial drift. Finally, I also discuss future endeavors with forthcoming ALMA observations.
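To make the opacity-spectral-index argument concrete, here is a minimal sketch with made-up fluxes (not the thesis data), using the standard relation that for optically thin emission in the Rayleigh-Jeans regime the flux scales as ν^(2+β), where β is the dust opacity spectral index.

```python
# Minimal sketch (made-up fluxes, not the thesis data): estimating the dust
# opacity spectral index beta from fluxes at two wavelengths, assuming optically
# thin emission in the Rayleigh-Jeans regime, where F_nu ~ nu**(2 + beta).
import numpy as np

C_CM_PER_S = 2.998e10

def spectral_index(flux1, lam1_mm, flux2, lam2_mm):
    """Returns alpha = dlogF/dlognu between the two measurements."""
    nu1 = C_CM_PER_S / (lam1_mm * 0.1)   # 1 mm = 0.1 cm
    nu2 = C_CM_PER_S / (lam2_mm * 0.1)
    return np.log(flux1 / flux2) / np.log(nu1 / nu2)

# Hypothetical disk fluxes: 100 mJy at 1.3 mm and 1.5 mJy at 7 mm
# (any free-free contribution at centimeter wavelengths is assumed removed).
alpha = spectral_index(100.0, 1.3, 1.5, 7.0)
beta = alpha - 2.0
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
# beta well below the interstellar-medium value (~1.7) is the usual signature
# of grain growth to at least millimeter/centimeter sizes.
```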
Abstract:
Motivated by needs in molecular diagnostics and advances in microfabrication, researchers have turned to microfluidic technology, as it provides approaches that achieve high throughput, high sensitivity, and high resolution. One strategy applied in microfluidics to fulfill such requirements is to convert a continuous analog signal into a digitized signal. One of the most commonly used examples of this conversion is digital PCR, where, by counting the number of reacted compartments (those triggered by the presence of the target entity) out of the total number of compartments, one can use Poisson statistics to calculate the amount of input target.
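The Poisson correction mentioned above can be written down directly; the following minimal sketch (the standard formula, with made-up counts rather than data from this work) converts the number of positive compartments into an estimate of the per-compartment occupancy and the total input.

```python
# Minimal sketch of the standard digital (Poisson) quantification: if k of n
# compartments react, the mean occupancy is lambda = -ln(1 - k/n) and the
# estimated total input is lambda * n. Example numbers are made up.
import math

def digital_quantification(k_positive, n_total, compartment_volume_nl):
    """Returns (targets per compartment, total copies, copies per microliter)."""
    fraction_positive = k_positive / n_total
    lam = -math.log(1.0 - fraction_positive)        # Poisson: P(empty) = exp(-lam)
    total_copies = lam * n_total
    copies_per_ul = lam / (compartment_volume_nl * 1e-3)   # 1 nL = 1e-3 uL
    return lam, total_copies, copies_per_ul

# Hypothetical run: 620 of 1000 compartments of 1 nL each are positive.
print(digital_quantification(620, 1000, 1.0))
# Simply counting positives (620) would underestimate the input (~968 copies),
# because some compartments received more than one target molecule.
```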
However, there are still problems to be solved and assumptions to be validated before the technology is widely employed. In this dissertation, the digital quantification strategy has been examined from two angles: efficiency and robustness. The former is a critical factor for ensuring the accuracy of absolute quantification methods, and the latter is the premise for such technology to be practically implemented in diagnostics beyond the laboratory. The two angles are further framed in a “fate” and “rate” determination scheme, where the influence of different parameters is attributed to either the fate-determination step or the rate-determination step. In this discussion, microfluidic platforms have been used to understand reaction mechanisms at the single-molecule level. Although the discussion raises more challenges for digital assay development, it brings the problem to the attention of the scientific community for the first time.
This dissertation also contributes towards developing point-of-care (POC) tests for limited-resource settings. On one hand, it improves access to the tests by incorporating mass-producible, low-cost plastic materials and by integrating new features that allow instant result acquisition and result feedback. On the other hand, it explores new isothermal chemistry and new strategies to address important global health concerns such as cystatin C quantification, HIV/HCV detection and treatment monitoring, as well as HCV genotyping.
Abstract:
[ES] The aim of this research project is to assist with the calibration of the five-revolute-pair (5R) mechanism assembled in the Mechanical Engineering workshop of the ETSI de Bilbao. First, optimisation algorithms will be studied, paying particular attention to the comparison between Levenberg-Marquardt and Gauss-Newton. Studies will be carried out in Matlab to determine which of the two is more effective in terms of both speed and accuracy. The more suitable one will be implemented in a program for the calibration of the 5R mechanism. Second, observability indices will be studied. The studies carried out on them so far are inconclusive, so an attempt will be made to clarify their usefulness and to determine which index should be used in this case. For this purpose, the solution of the inverse kinematic problem will have to be programmed. Finally, the results and the corresponding conclusions will be presented. A plan for a future line of research, taking this work as its starting point, will also be proposed.
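As a rough illustration of the planned comparison (sketched here in Python with an invented exponential-fit test problem, not the 5R calibration model), one can pit a basic Gauss-Newton iteration against SciPy's Levenberg-Marquardt implementation.

```python
# Minimal sketch (invented exponential-fit example, not the 5R kinematic model):
# compare a basic Gauss-Newton iteration with SciPy's Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.standard_normal(x.size)  # synthetic data

def residuals(p):
    a, b = p
    return a * np.exp(-b * x) - y

def jacobian(p):
    a, b = p
    e = np.exp(-b * x)
    return np.column_stack([e, -a * x * e])

def gauss_newton(p0, n_iter=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        J, r = jacobian(p), residuals(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
    return p

p_gn = gauss_newton([1.0, 1.0])
p_lm = least_squares(residuals, [1.0, 1.0], jac=jacobian, method="lm").x
print("Gauss-Newton:       ", p_gn)
print("Levenberg-Marquardt:", p_lm)
# LM adds a damping term to the normal equations, trading some speed near the
# solution for robustness when J^T J is ill-conditioned or the start is poor,
# which is the kind of trade-off the Matlab study is meant to quantify.
```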