901 results for omega-invariant
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle to effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented in which distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform by employing parallel processing and local information are introduced.
Abstract:
Ruin occurs at the first time the surplus of a company or an institution becomes negative. In the Omega model, it is assumed that even with a negative surplus the company can do business as usual until bankruptcy occurs. The probability of bankruptcy at a given time depends only on the value of the negative surplus at that time. Under the assumption of Brownian motion for the surplus, the expected discounted value of a penalty at bankruptcy is determined, and hence the probability of bankruptcy. There is an intrinsic relation between the probability of no bankruptcy and an exposure random variable. In special cases, the distribution of the total time the Brownian motion spends below zero is found, and the Laplace transform of the integral of the negative part of the Brownian motion is expressed in terms of the Airy function of the first kind.
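The bankruptcy mechanism of the Omega model can be sketched by Monte Carlo: the surplus diffuses, and while it is negative a bankruptcy clock ticks. The constant rate `omega` and all parameter values below are illustrative stand-ins for the model's state-dependent rate omega(x); the paper itself derives closed-form results, not simulations:

```python
import math
import random

def bankruptcy_prob(x0=1.0, mu=0.1, sigma=1.0, omega=0.5,
                    horizon=30.0, dt=0.01, n_paths=500, seed=0):
    """Crude Monte Carlo estimate of the bankruptcy probability on a finite horizon.

    The surplus follows Brownian motion with drift; while it is negative,
    bankruptcy occurs at constant rate `omega` (a stand-in for omega(x))."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while t < horizon:
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # While below zero, the bankruptcy clock fires with prob. ~ omega*dt per step.
            if x < 0 and rng.random() < omega * dt:
                ruined += 1
                break
            t += dt
    return ruined / n_paths

p = bankruptcy_prob()
print(0.0 < p < 1.0)  # some paths go bankrupt, some never dip below zero
```

Note the defining feature of the model: crossing zero is not ruin in itself; only the exponential clock running during the negative excursions triggers bankruptcy, which is why the total time spent below zero (studied in the abstract) matters.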
Abstract:
In two previous papers [J. Differential Equations, 228 (2006), pp. 530-579; Discrete Contin. Dyn. Syst. Ser. B, 6 (2006), pp. 1261-1300] we developed fast algorithms for the computation of invariant tori in quasi-periodic systems and proved theorems that assess their accuracy. In this paper, we study the performance of these algorithms in actual implementations. More importantly, we note that, due to the speed of the algorithms and the theoretical results on their reliability, we can compute with confidence invariant objects close to the breakdown of their hyperbolicity properties. This allows us to identify a mechanism of loss of hyperbolicity and measure some of its quantitative regularities. We find that some systems lose hyperbolicity because the stable and unstable bundles approach each other, while the Lyapunov multipliers remain away from 1. We find empirically that, close to the breakdown, the distances between the invariant bundles and the Lyapunov multipliers, which are natural measures of hyperbolicity, depend on the parameters as power laws with universal exponents. We also observe that, even though the rigorous justifications in [J. Differential Equations, 228 (2006), pp. 530-579] are developed only for hyperbolic tori, the algorithms also work for elliptic tori in Hamiltonian systems. We can continue these tori and also compute some bifurcations at resonance which may lead to the existence of hyperbolic tori with nonorientable bundles. We compute manifolds tangent to nonorientable bundles.
Abstract:
We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimensions larger than 1. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option to compute quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method in a parallel computer. In these flows we compute invariant tori of dimensions up to 5, by taking suitable sections.
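The object such algorithms approximate is a parameterization K satisfying the invariance equation F(K(theta)) = K(theta + omega). A toy numerical check of that equation on a Fourier grid, for a map whose invariant circle is known exactly (not the quadratically convergent Newton/Floquet scheme of the paper), looks like this:

```python
import numpy as np

# Toy check of the invariance equation F(K(theta)) = K(theta + omega)
# for a rigid rotation of the plane, whose invariant circles are known exactly.
omega = 2 * np.pi * (np.sqrt(5) - 1) / 2       # irrational rotation angle
c, s = np.cos(omega), np.sin(omega)
F = np.array([[c, -s], [s, c]])                # the map: rotation by omega

N = 64                                         # Fourier grid size
theta = 2 * np.pi * np.arange(N) / N
K = np.vstack([np.cos(theta), np.sin(theta)])  # candidate torus K(theta): the unit circle

# Residual of the invariance equation, evaluated at all grid points at once.
residual = F @ K - np.vstack([np.cos(theta + omega), np.sin(theta + omega)])
print(np.max(np.abs(residual)) < 1e-12)
```

In the actual algorithm, K is unknown and this residual drives a Newton iteration on the Fourier coefficients, with the Floquet matrix obtained as a by-product; here the residual is zero (up to rounding) because the exact circle was supplied.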
Abstract:
The carrot leaf dehydration conditions in an air-circulation oven were optimized through response surface methodology (RSM) to minimize the degradation of polyunsaturated fatty acids, particularly alpha-linolenic acid (LNA, 18:3n-3). The optimized leaf drying time and temperature were 43 h and 70 °C, respectively. The fatty acids (FA) were analyzed by gas chromatography with a flame ionization detector and a fused silica capillary column; FA were identified against standards and by equivalent chain length. LNA and the other FA were quantified against a C21:0 internal standard. After dehydration, the amount of LNA in the dehydrated carrot leaves was 984 mg/100 g dry matter.
Abstract:
The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find these difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure, and then incrementally extends the program in steps of adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers. Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools to assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs. It is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover.
Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be efficiently detected. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. The tool is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure. Conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem prover environment such as PVS. The programmer reduces a large verification problem with the aid of the tool into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm. Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early and practically oriented programming courses.
Our hypothesis is that verification could be introduced early in CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
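The discipline of stating invariants first and checking every code addition against them can be miniaturized with plain assertions. This sketch illustrates the underlying idea only; it is not the Socos/PVS toolchain, which proves the invariants statically rather than checking them at run time:

```python
# Invariant-first development in miniature: the invariant is written down before
# the loop body, and the body is only "allowed" if it preserves the invariant.

def sum_of_squares(n):
    """Compute 0^2 + 1^2 + ... + (n-1)^2 with an explicit loop invariant."""
    i, total = 0, 0
    while i < n:
        # Invariant: total == sum of k^2 for k in range(i)
        assert total == sum(k * k for k in range(i))
        total += i * i
        i += 1
    # Postcondition: follows from the invariant plus the negated guard (i == n).
    assert total == sum(k * k for k in range(n))
    return total

print(sum_of_squares(10))  # -> 285
```

In invariant-based programming the same invariant would appear as a node of an invariant diagram, and the proof that each transition preserves it would be discharged by the theorem prover instead of executed as an assertion.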
Abstract:
OBJECTIVE: To evaluate the effect of preoperative supplementation with omega-3 fatty acids on the healing of colonic anastomoses in malnourished rats receiving paclitaxel. METHODS: we studied 160 male Wistar rats, divided into two groups: one subjected to malnutrition by pair feeding (M) for four weeks, and another that received food ad libitum (W). In the fourth week, the groups were further divided into two subgroups that received omega-3 or olive oil by gavage. The animals were submitted to colonic transection and end-to-end anastomosis. After the operation, each of the four groups was divided into two subgroups that received intraperitoneal isovolumetric solutions of saline or paclitaxel. RESULTS: mortality was 26.8% higher in the group of animals that received paclitaxel (p = 0.003). The complete rupture strength was greater in the well-nourished-oil-paclitaxel group (WOP) than in the malnourished-oil-paclitaxel one (MOP). The collagen maturation index was higher in the well-nourished-oil-saline group (WOS) than in the malnourished-oil-saline group (MOS), lower in the malnourished-oil-saline group (MOS) than in the malnourished-omega3-saline one (M3S), and lower in the well-nourished-omega3-saline group (W3S) than in the malnourished-omega3-saline one (M3S). The blood vessel count was higher in the malnourished-oil-saline group (MOS) than in the malnourished-oil-paclitaxel group (MOP), and lower in the malnourished-oil-saline group (MOS) than in the malnourished-omega3-paclitaxel group (M3P). CONCLUSION: supplementation with omega-3 fatty acids was associated with a significant increase in the production of mature collagen in malnourished animals, with a reversal of the harmful effects of malnutrition combined with paclitaxel on rupture strength, and with a stimulus to neoangiogenesis in the group receiving paclitaxel.
Abstract:
Objective: to evaluate liver regeneration in rats after 60% partial hepatectomy, with and without a diet supplemented with fatty acids, through the study of regenerated liver weight, laboratory parameters of liver function, and histology. Methods: thirty-six adult male Wistar rats weighing between 195 and 330 g were used, assigned to control and supplementation groups. The supplementation group received the diet by gavage, and animals were killed after 24 h, 72 h and seven days. Regeneration was evaluated through analysis of liver weight gain, serum aspartate aminotransferase, alanine aminotransferase and gamma-glutamyltranspeptidase, and mitosis counts in liver sections stained with H&E. Results: the supplemented group showed no statistical difference (p>0.05) in the evolution of weights. Administration of fatty acids post-hepatectomy led to a significant reduction in gamma-glutamyltranspeptidase levels, which may reflect liver regeneration. The mitotic index did not differ among the groups at any time point. Conclusion: supplementation with fatty acids in rats undergoing 60% hepatic resection showed no significant effect on liver regeneration.
Abstract:
Enantiopure intermediates are of high value in drug synthesis. Biocatalysis, alone or combined with chemical synthesis, provides powerful tools for accessing enantiopure compounds. In biocatalysis, the chemo-, regio- and enantioselectivity of enzymes is combined with their inherently environmentally benign nature. Enzymes can be applied in versatile chemical reactions with non-natural substrates under synthesis conditions. Immobilization of an enzyme is a crucial part of an efficient biocatalytic synthesis method. Successful immobilization enhances the catalytic performance of an enzyme and enables its reuse in successive reactions. This thesis demonstrates the feasibility of biocatalysis in the preparation of enantiopure secondary alcohols and primary amines. The viability and synthetic usability of the studied biocatalytic methods are addressed throughout the thesis. Candida antarctica lipase B (CAL-B) catalyzed enantioselective O-acylation of racemic secondary alcohols was successfully combined with in situ racemization in a dynamic kinetic resolution, affording the (R)-esters in high yields and enantiopurities. Side reactions causing a decrease in yield and enantiopurity were suppressed. CAL-B was also utilized in the solvent-free kinetic resolution of racemic primary amines. This method produced the enantiomers as (R)-amides and (S)-amines under ambient conditions. An in-house sol-gel entrapment increased the reusability of CAL-B. Arthrobacter sp. omega-transaminase was entrapped in sol-gel matrices to obtain a reusable catalyst for the preparation of enantiopure primary amines in an aqueous medium. The resulting heterogeneous omega-transaminase catalyst enabled the enantiomeric enrichment of the racemic amines to their (S)-enantiomers. The synthetic usability of the sol-gel catalyst was demonstrated in five successive preparative kinetic resolutions.
Abstract:
The transient receptor potential (TRP) channel family is a relatively new group of cation channels that modulate a wide range of physiological mechanisms. In the nervous system, the functions of TRP channels have been associated with thermosensation, pain transduction, neurotransmitter release, and redox signaling, among others. However, they have also been extensively correlated with the pathogenesis of several innate and acquired diseases. On the other hand, omega-3 polyunsaturated fatty acids (n-3 fatty acids) have also been associated with several processes that seem to counterbalance or contribute to the function of several TRPs. In this short review, we discuss some of the remarkable new findings in this field. We also review the possible roles played by n-3 fatty acids in cell signaling that can either control or be controlled by TRP channels in neurodegenerative processes, as well as both the direct and indirect actions of n-3 fatty acids on TRP channels.
Abstract:
The chemical composition and antioxidant capacity of five seeds, chia, golden flax, brown flax, white perilla, and brown perilla, were determined. The chemical properties analyzed included moisture, ash, crude protein, carbohydrates, total lipids, fatty acids, and antioxidant capacity (ABTS+, DPPH, and FRAP). The results showed the highest amounts of protein and total lipids in brown and white perilla. Perilla and chia showed higher amounts of alpha-linolenic fatty acid than the flaxseed varieties: 531.44 mg g-1 of lipids in brown perilla, 539.07 mg g-1 of lipids in white perilla, and 544.85 mg g-1 of lipids in chia seed. The antioxidant capacity of the seeds, evaluated with the ABTS+, DPPH, and FRAP methods, showed that brown perilla had greater antioxidant capacity than white perilla, flax, and chia seeds.
Abstract:
This study evaluated the effect of adding flaxseed flour to the diet of Nile tilapia on the fatty acid composition of fillets using chemometrics. A traditional and an experimental diet containing flaxseed flour were used to feed the fish for 60 days. An increase of 18:3 n-3 and 22:6 n-3 and a decrease of 18:2 n-6 were observed in the tilapia fillets fed the experimental diet. There was a reduction in the n-6:n-3 ratio. A period of 45 days of incorporation caused a significant change in tilapia chemical composition. Principal Component Analysis showed that the time periods of 45 and 60 days positively contributed to the total content of n-3, LNA, and DHA, highlighting the effect of omega-3 incorporation in the treatment containing flaxseed flour.
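Principal Component Analysis, as used in such chemometric studies, reduces a samples-by-variables matrix to a few orthogonal components ranked by explained variance. A minimal sketch via SVD follows, with purely illustrative numbers, not the fillet fatty-acid data of the study:

```python
import numpy as np

# Toy samples-by-variables matrix (illustrative values only): rows could be
# feeding periods, columns could be fatty-acid contents.
X = np.array([[2.0, 0.1, 1.0],
              [4.0, 0.2, 2.1],
              [6.0, 0.3, 2.9],
              [8.0, 0.4, 4.2]])

Xc = X - X.mean(axis=0)                     # center each variable
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                          # sample coordinates on the principal components
explained = S**2 / np.sum(S**2)             # fraction of variance per component
print(explained[0] > 0.99)                  # the toy columns are nearly collinear
```

In a chemometric plot, `scores` would be the points (samples) and the rows of `Vt` the loadings showing which fatty acids drive each component, which is how a contribution like "45 and 60 days contributed positively to n-3 content" is read off.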
Multivariate study of Nile tilapia byproducts enriched with omega-3 and dried with different methods
Abstract:
The present work aimed at studying the effect of different drying methods applied to byproducts (heads, viscera and carcasses) of tilapia fed with flaxseed, verifying the contents of omega-3 fatty acids. Two diets were given to the tilapia, a control and a flaxseed formulation, over the course of 60 days. After this period, the fish were slaughtered and their byproducts (heads, viscera and carcasses) were collected. These fish parts were analyzed in natura, lyophilized, and oven-dried. Byproducts from tilapia fed with flaxseed presented docosapentaenoic, eicosapentaenoic and docosahexaenoic fatty acids as a result of the enzymatic metabolism of the fish. The byproducts from the oven-drying process had lower levels of polyunsaturated fatty acids. In the multivariate analysis, the byproducts from fish fed with flaxseed had a greater composition of fatty acids. The addition of flaxseed to fish diets, as well as the utilization of their byproducts, may become a good business strategy. Additionally, the byproducts may be dried to facilitate transport and storage.