917 results for Inverse Problem in Optics
Abstract:
Glasses containing metallic nanoparticles are promising materials for technological applications in optics and photonics. Although several methods are available to generate nanoparticles in glass, only femtosecond lasers allow this generation to be controlled three-dimensionally. In this context, the present work investigates the generation of copper nanoparticles on the surface and in the bulk of a borosilicate glass by fs-laser irradiation. We verified the formation of copper nanoparticles, after heat treatment, by UV-Vis absorption, transmission electron microscopy and electron diffraction. Preferential growth of copper nanoparticles was observed at the bottom of the irradiated region, which was attributed to self-focusing in the glass. (c) 2012 Optical Society of America
Abstract:
We propose an integral formulation of the equations of motion of a large class of field theories which leads in a quite natural and direct way to the construction of conservation laws. The approach is based on generalized non-abelian Stokes theorems for p-form connections, and its appropriate mathematical language is that of loop spaces. The equations of motion are written as the equality of a hyper-volume ordered integral to a hyper-surface ordered integral on the border of that hyper-volume. The approach applies to integrable field theories in (1 + 1) dimensions, Chern-Simons theories in (2 + 1) dimensions, and non-abelian gauge theories in (2 + 1) and (3 + 1) dimensions. The results presented in this paper are relevant for the understanding of global properties of those theories. As a special byproduct we solve a long-standing problem in (3 + 1)-dimensional Yang-Mills theory, namely the construction of conserved charges, valid for any solution, which are invariant under arbitrary gauge transformations. (C) 2012 Elsevier B.V. All rights reserved.
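For orientation, the abelian prototype of such an equality between a hyper-volume integral and an integral over its border is the ordinary Stokes theorem for a p-form A, stated here as a standard fact; in the non-abelian case treated in the paper, both sides acquire path and surface orderings.

```latex
% Abelian Stokes theorem: for a p-form A on a (p+1)-dimensional
% volume \Omega with border \partial\Omega,
\int_{\Omega} \mathrm{d}A = \int_{\partial\Omega} A
```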
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the ‖F_P‖_q minimization problem for q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the relation ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times, as well as the influence of the choice of seeds on the output.
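A minimal sketch of the w → w^q reduction described in this abstract, using an off-the-shelf min-cut solver in place of GC(sum). The toy graph, the seed labels, and the helper name min_lq_cut are illustrative assumptions; this is not the paper's GC(max) implementation.

```python
# Hypothetical illustration of the w -> w^q reduction: a ||F_P||_q
# minimization (finite q) is handed to a plain min-cut solver after
# every weight w(e) is raised to the power q.
import networkx as nx

def min_lq_cut(edges, source, sink, q=1.0):
    """edges: iterable of (u, v, w) with w >= 0; returns (cut value, partition)."""
    G = nx.DiGraph()
    for u, v, w in edges:
        c = w ** q                     # the w -> w^q substitution
        G.add_edge(u, v, capacity=c)   # undirected weight = two directed arcs
        G.add_edge(v, u, capacity=c)
    return nx.minimum_cut(G, source, sink)

# Tiny example with an object seed "s" and a background seed "t".
edges = [("s", "a", 3.0), ("a", "b", 1.0), ("b", "t", 3.0), ("s", "b", 0.5)]
for q in (1, 2, 4):
    value, (obj, bkg) = min_lq_cut(edges, "s", "t", q=q)
    print(q, value, sorted(obj))
```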
Abstract:
The definition of the sample size is a major problem in phytosociological studies. The species accumulation curve is used to define sampling sufficiency, but this method has some limitations, such as the absence of a stabilization point that can be determined objectively and the arbitrariness of the order of sampling units in the curve. A solution to this problem is the use of randomization procedures, e.g., permutation, to obtain a mean species accumulation curve and empirical confidence intervals. However, the randomization process emphasizes the asymptotic character of the curve. Moreover, the absence of an inflection point in the curve makes it impossible to define the point of optimum sample size objectively.
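A minimal sketch of the randomization procedure mentioned above, on an invented presence/absence matrix: the order of sampling units is permuted many times, each permutation yields one accumulation curve, and the mean curve with empirical percentile envelopes is taken across permutations.

```python
# Permutation-based mean species accumulation curve with empirical
# confidence envelopes; the data matrix is random and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 40)) < 0.15   # 30 sampling units x 40 species (presence/absence)

def accumulation_curve(X, order):
    """Cumulative number of distinct species along one ordering of units."""
    seen = np.zeros(X.shape[1], dtype=bool)
    counts = np.empty(X.shape[0], dtype=int)
    for i, unit in enumerate(order):
        seen |= X[unit]
        counts[i] = seen.sum()
    return counts

curves = np.array([accumulation_curve(X, rng.permutation(X.shape[0]))
                   for _ in range(999)])
mean_curve = curves.mean(axis=0)
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)  # empirical 95% envelope
```

Note that, as the abstract observes, the mean curve obtained this way is smooth and asymptotic, so it still offers no objective stabilization point.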
Abstract:
This paper discusses the power allocation problem with a fixed rate constraint in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective through an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal a shortcoming of the approach, so a new method combining swarm intelligence and IWFA is proposed to make game-theoretic approaches practicable in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios and a complexity comparison with the classical IWFA are presented. (C) 2011 Elsevier B.V. All rights reserved.
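For orientation only, the sketch below shows a generic distributed power-update iteration with fixed per-user SINR (i.e., fixed-rate) targets, of the standard-interference-function type on which water-filling-style analyses build. All gains and targets are invented assumptions; this is not the paper's hybrid swarm/IWFA method.

```python
# Generic fixed-target power-control iteration (illustrative only):
# each user scales its power by target_SINR / current_SINR.
import numpy as np

gain = np.array([[1.0, 0.1, 0.2],
                 [0.2, 1.0, 0.1],
                 [0.1, 0.3, 1.0]])    # link gains g[i, j] (assumed values)
noise = 1e-2
target = np.array([2.0, 1.5, 1.0])   # fixed SINR targets, one per user

p = np.ones(3)                       # initial powers
for _ in range(100):
    interference = gain @ p - np.diag(gain) * p + noise
    sinr = np.diag(gain) * p / interference
    p = p * target / sinr            # move each power toward its target

print(np.round(p, 4))                # converges when the targets are feasible
```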
Abstract:
The use of laser light to modify a material's surface or bulk, as well as to induce changes in the volume through a chemical reaction, has received great attention in the last few years due to the possibility of tailoring material properties for technological applications. Here, we report on recent progress in microstructuring and microfabrication of polymeric materials using femtosecond lasers. In the first part, we describe how micromachining of polymeric materials, either at the surface or in the bulk, can be employed to change their optical and chemical properties, which is promising for fabricating waveguides, resonators, and self-cleaning surfaces. In the second part, we discuss how two-photon absorption polymerization can be used to fabricate active microstructures by doping the basic resin with molecules presenting biological and optical properties of interest. Such microstructures can be used to fabricate devices with applications in optics, such as microLEDs and waveguides, and in medicine, such as scaffolds for tissue growth.
Abstract:
Problem: In this study, we explored the relationship between decidual cells (DC) and interferon (IFN)-gamma, in the presence or absence of ectoplacental cone (EC), using a coculture system. Method of study: Decidual cells and EC were isolated from pregnant mice on gestation day 7.5. DCs were cultured for 48 hr and then treated with fresh EC. After characterization, they were treated with IFN-gamma, and cell death was evaluated. Results: IFN-gamma drastically increased decidual apoptosis, which was partially reverted by the addition of EC to the IFN-gamma-treated decidual culture. Moreover, the addition of EC to non-treated DC cultures was also capable of attenuating death rates. Conclusion: Resistance to apoptosis may be induced in DC by the EC. This suggests that EC may participate in the inhibition of IFN-gamma-dependent apoptosis and, therefore, play an important role in DC survival in a cytokine-enriched placental environment.
Abstract:
A total of 8,058 male and female mixed-breed goats, aged 1-4 years, were slaughtered over a period of 7 months at the public slaughterhouse of Patos city, Paraíba state, in the Northeast region of Brazil; 822 animals were inspected for gross lesions of tuberculosis, and 12 (1.46%) had lesions suggestive of tuberculosis in the mammary gland, lungs, liver and mediastinal, mesenteric, submandibular, parotid and prescapular lymph nodes. The presence of granulomatous lesions was confirmed in the submandibular lymph node of one (8.3%) goat by histopathological examination, and the same sample was confirmed positive by mycobacterial culture. The isolate was confirmed as belonging to the M. tuberculosis complex by PCR restriction enzyme analysis (PRA). Spoligotyping assigned the isolate to spoligotype SB0295 on the M. bovis Spoligotype Database website (www.mbovis.org), and it was classified as M. bovis. The occurrence of M. bovis in goats in this study suggests that this species may be a potential source of infection for humans and should be regarded as a possible problem for the advancement of the control and eradication program for bovine tuberculosis in Brazil.
Abstract:
The importance of mechanical aspects related to cell activity and its environment is becoming more evident due to their influence on stem cell differentiation and on the development of diseases such as atherosclerosis. Mechanical tension homeostasis is related to normal tissue behavior, and its lack may be related to the formation of cancer, which shows a higher mechanical tension. Due to the complexity of cellular activity, the application of simplified models may elucidate which factors are really essential and which have a marginal effect. The development of a systematic method to reconstruct the elements involved in the perception of mechanical aspects by the cell may substantially accelerate the validation of these models. This work proposes the development of a routine capable of reconstructing the topology of focal adhesions and of the actomyosin portion of the cytoskeleton from the displacement field generated by the cell on a flexible substrate. Another way to think of this problem is to develop an algorithm to reconstruct the forces applied by the cell from measurements of the substrate displacement, which characterizes an inverse problem. For this kind of problem, the Topology Optimization Method (TOM) is suitable for finding a solution. TOM consists of an iterative application of an optimization method and an analysis method to obtain an optimal distribution of material in a fixed domain. One way to obtain the substrate displacement experimentally is through Traction Force Microscopy (TFM), which also provides the forces applied by the cell. Along with systematically generating the distributions of focal adhesions and actomyosin for the validation of simplified models, the algorithm also represents a complementary and more phenomenological approach to TFM. As a first approximation, the actin fibers and the flexible substrate are represented by a two-dimensional linear finite element method (FEM). Actin contraction is modeled as an initial stress in the FEM elements. Focal adhesions connecting actin and substrate are represented by springs. The algorithm was applied to data obtained from experiments on cytoskeletal prestress and micropatterning, comparing the numerical results to the experimental ones.
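As a complementary illustration of the inverse problem framed in this abstract, the sketch below recovers tractions from displacements with a direct Tikhonov-regularized least-squares solve, given an assumed linear forward model u = G f. The operator G is a random stand-in for a real FEM or Green's-function operator, and this direct solve is not the topology optimization routine proposed in the work.

```python
# Regularized inverse solve: recover tractions f from noisy displacements
# u = G f + noise. All matrices and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 50
G = rng.normal(size=(n, n)) / np.sqrt(n)         # stand-in forward operator
f_true = np.zeros(n)
f_true[[5, 20, 35]] = [1.0, -0.7, 0.4]           # sparse "focal adhesion" forces
u_meas = G @ f_true + 0.01 * rng.normal(size=n)  # noisy measured displacements

lam = 1e-2                                       # Tikhonov regularization weight
f_hat = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u_meas)
```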
Abstract:
Drug dependence is a major health problem in adults and has been recognized as a significant problem in adolescents. We previously demonstrated that repeated treatment with a behaviorally sensitizing dose of ethanol in adult mice induced tolerance or no sensitization in adolescents, and that repeatedly ethanol-treated adolescents showed lower Fos and Egr-1 expression than adult mice in the prefrontal cortex (PFC). In the present work, we investigated the effects of acute and repeated ethanol administration on cyclic adenosine monophosphate (cAMP) response element-binding protein (CREB) DNA-binding activity, using the electrophoretic mobility shift assay (EMSA), and on the phosphorylated CREB (pCREB)/CREB ratio, using immunoblotting, in both the PFC and hippocampus of adolescent and adult mice. Adult mice exhibited typical locomotor sensitization after 15 days of daily treatment with 2.0 g/kg ethanol, whereas adolescent mice did not exhibit sensitization. Overall, adolescent mice displayed lower CREB binding activity in the PFC compared with adult mice, whereas opposite effects were observed in the hippocampus. The present results indicate that ethanol exposure induces significant and differential neuroadaptive changes in CREB DNA-binding activity in the PFC and hippocampus of adolescent mice compared with adult mice. These differential molecular changes may contribute to the blunted ethanol-induced behavioral sensitization observed in adolescent mice.
Abstract:
Using a mathematical approach accessible to graduate students of physics and engineering, we show how solitons arise as solutions of nonlinear Schrödinger equations. We also give references on the history of solitons in general, their fundamental properties, and how they have found applications in optics and fiber-optic communications.
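The canonical instance, in a standard normalized textbook form stated here for orientation rather than quoted from the paper, is the focusing nonlinear Schrödinger equation together with its bright one-soliton solution:

```latex
i\,\partial_t \psi + \tfrac{1}{2}\,\partial_x^2 \psi + |\psi|^2 \psi = 0,
\qquad
\psi(x,t) = \eta\,\operatorname{sech}(\eta x)\, e^{i \eta^2 t / 2}
```

Direct substitution verifies this for any amplitude η > 0: the phase rotation cancels the linear part of the dispersion, while the nonlinearity cancels the remaining sech³ term, which is the balance that lets the pulse propagate without spreading.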
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used, e.g., for estimating chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).
Abstract:
[EN] We propose four algorithms for computing the inverse optical flow between two images. We assume that the forward optical flow has already been obtained and we need to estimate the flow in the backward direction. The forward and backward flows can be related through a warping formula, which allows us to propose very efficient algorithms. These are presented in increasing order of complexity. The proposed methods provide high accuracy with low memory requirements and low running times. In general, the processing reduces to one or two image passes. Typically, when objects move in a sequence, some regions may appear or disappear. Finding the inverse flows in these situations is difficult and, in some cases, it is not possible to obtain a correct solution. Our algorithms deal with occlusions very easily and reliably. On the other hand, disocclusions have to be overcome as a post-processing step. We propose three approaches for filling disocclusions. In the experimental results, we use standard synthetic sequences to study the performance of the proposed methods and show that they yield very accurate solutions. We also analyze the performance of the filling strategies.
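A minimal sketch of the warping relation that underlies such methods: if the forward flow F sends pixel x to x + F(x), the backward flow B satisfies B(x + F(x)) = -F(x), so one image pass can splat -F into the target grid. The rounding, the last-writer-wins tie-break at occlusions, and the hole marking below are simplifying assumptions for illustration, not one of the paper's four algorithms.

```python
# One-pass inverse flow by forward splatting; unwritten pixels are
# disocclusions and remain marked as holes (NaN) for post-processing.
import numpy as np

def inverse_flow(F):
    """F: (H, W, 2) forward flow as (dy, dx). Returns (B, hole_mask)."""
    H, W, _ = F.shape
    B = np.full((H, W, 2), np.nan)
    ys, xs = np.mgrid[0:H, 0:W]
    ty = np.rint(ys + F[..., 0]).astype(int)   # target rows
    tx = np.rint(xs + F[..., 1]).astype(int)   # target columns
    ok = (ty >= 0) & (ty < H) & (tx >= 0) & (tx < W)
    B[ty[ok], tx[ok]] = -F[ys[ok], xs[ok]]     # last writer wins at occlusions
    return B, np.isnan(B[..., 0])
```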
Abstract:
Work carried out by: Maldonado, F.; Packard, T.; Gómez, M.; Santana Rodríguez, J. J
Abstract:
[EN] We analyze the discontinuity-preserving problem in TV-L1 optical flow methods. This type of method typically creates rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. Hence, we study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, different schemes to solve the problem emerge. One of these consists in separating the pure TV process from the mitigating strategy. This has been used in another work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) we study a fully automatic approach that solves the problem based on the information of the whole image; (ii) we derive a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison of the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions. This can be easily achieved by using strong regularizations and high penalizations at image contours.
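A common concrete form of the gradient-based mitigation discussed here, written for orientation in the usual TV-L1 notation (the exponential weight is one typical choice from the literature, not necessarily the exact formulation analyzed in the paper):

```latex
E(u) = \int_{\Omega} g\big(|\nabla I_0|\big)\,\big(|\nabla u_1| + |\nabla u_2|\big)\,dx
     + \lambda \int_{\Omega} \big| I_1\big(x + u(x)\big) - I_0(x) \big|\,dx,
\qquad
g(s) = e^{-\alpha s^{\beta}}
```

Here u = (u_1, u_2) is the flow field and g decreases with the image gradient, so diffusion is inhibited at contours of I_0; the instabilities studied in the paper arise when a weighting of this kind is embedded directly in the TV term.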