965 results for Exact computation
Abstract:
Objectives and study method: The objective of this study is to develop exact algorithms that can serve as management tools for agricultural production planning and to obtain exact solutions for two of the most well-known two-dimensional packing problems: the strip packing problem and the bin packing problem. For the agricultural production planning problem, we propose a new three-stage hierarchical scheme to improve current agricultural practices. The objective of the first stage is to delineate rectangular, homogeneous management zones within the farmer's plots, considering the physical and chemical soil properties. This is an important task because the soil properties directly affect the agricultural production planning. The methodology for this stage is based on a new method called "Positions and Covering" (P&C), which first generates all the possible positions into which the plot can be delineated. We then use a linear programming model to obtain the optimal physical and chemical management-zone delineations of the plot. In the second stage, the objective is to determine the optimal crop pattern that maximizes the farmer's profit, taking into account the previous management-zone delineations. In this case, the crop pattern is affected by both the physical and the chemical management-zone delineations. A mixed-integer linear programming model is used to solve this stage. The objective of the last stage is to determine, in real time, the amount of water to apply to each crop. This stage takes as input the solution of the crop-planning stage, the atmospheric conditions (temperature, radiation, etc.), the humidity level in the plots, and the physical management zones of the plots, to name a few. This procedure is carried out in real time during each irrigation period. A linear programming model is used to solve this problem.
A breakthrough happened when we realized that we could adapt the P&C methodology to obtain optimal solutions for the two-dimensional bin packing and strip packing problems. We show empirically that our methodologies are efficient on instances based on real data for both settings: the agricultural problem and the two-dimensional packing problems. Contributions and conclusions: The exact algorithms presented in this study can be used for decision-making support in agricultural planning and in two-dimensional packing problems. For the agricultural planning problem, we show that the implementation of the new hierarchical approach can improve the farmer's profit by between 5.27% and 8.21% through the optimization of natural resources. An important characteristic of this problem is that the soil properties (physical and chemical) and the real-time factors (climate, humidity level, evapotranspiration, etc.) are incorporated. With respect to the two-dimensional packing problems, one of the main contributions of this study is that we have demonstrated that many of the best solutions found in the literature by other (heuristic) approaches are in fact optimal. This is important because, until now, some of these solutions had not been guaranteed to be optimal.
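The first step of the "Positions and Covering" methodology, generating every candidate position before a covering model selects among them, can be sketched as follows for a plot discretized into a grid of cells. The function names and the grid discretization are our own illustrative assumptions; the linear program described above would then select positions that exactly cover the plot.

```python
from itertools import product

def generate_positions(rows, cols, max_h=None, max_w=None):
    """Enumerate every axis-aligned rectangular zone that fits in a
    rows x cols grid of plot cells, as (top, left, height, width)."""
    max_h = max_h or rows
    max_w = max_w or cols
    positions = []
    for h, w in product(range(1, max_h + 1), range(1, max_w + 1)):
        for top in range(rows - h + 1):
            for left in range(cols - w + 1):
                positions.append((top, left, h, w))
    return positions

def covered_cells(pos):
    """Set of (row, col) cells covered by one candidate zone, used to
    build the covering constraints of the delineation model."""
    top, left, h, w = pos
    return {(r, c) for r in range(top, top + h)
                   for c in range(left, left + w)}
```

For a 2x2 grid this yields nine candidate positions; the covering model would then require each cell to be covered exactly once by the selected zones.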
Abstract:
Here we present a sample MATLAB program for the numerical evaluation of the confluent hypergeometric function Φ2. The program is based on computing an inverse Laplace transform using the algorithm suggested by Simon and Alouini in their reference textbook [1].
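The program itself is not reproduced here. As a rough illustration of the general technique of numerical Laplace-transform inversion, the following is a Python sketch using the classic Gaver-Stehfest algorithm (a different inversion method from the one in [1], and Python rather than MATLAB), checked against F(s) = 1/(s+1), whose inverse transform is e^(-t):

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s) by sampling F
    at the N real points k*ln(2)/t and combining with Stehfest weights."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))
```

For example, `invert_laplace(lambda s: 1 / (s + 1), 1.0)` agrees with `math.exp(-1)` to several digits; an implementation of Φ2 would apply the same idea to the transform given in [1].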
Abstract:
We propose a novel finite element formulation that significantly reduces the number of degrees of freedom necessary to obtain reasonably accurate approximations of the low-frequency component of the deformation in boundary-value problems. In contrast to the standard Ritz–Galerkin approach, the shape functions are defined on a Lie algebra—the logarithmic space—of the deformation function. We construct a deformation function based on an interpolation of transformations at the nodes of the finite element. In the case of the geometrically exact planar Bernoulli beam element presented in this work, these transformation functions at the nodes are given as rotations. However, due to an intrinsic coupling between rotational and translational components of the deformation function, the formulation provides for a good approximation of the deflection of the beam, as well as of the resultant forces and moments. As both the translational and the rotational components of the deformation function are defined on the logarithmic space, we propose to refer to the novel approach as the “Logarithmic finite element method”, or “LogFE” method.
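The central idea, defining shape functions on the logarithmic space of the deformation rather than on matrix entries, can be illustrated for planar rotations. This toy sketch is our own, not the authors' beam element: blending two nodal rotations in log space always yields a rotation, while blending matrix entries does not.

```python
import math

def rot(theta):
    """2x2 planar rotation matrix for angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def log_rot(R):
    """Logarithm of a planar rotation: its angle."""
    return math.atan2(R[1][0], R[0][0])

def interp_log(R1, R2, xi):
    """Shape-function-style blend on the Lie algebra (log space):
    exp((1 - xi) log R1 + xi log R2) is again a rotation."""
    return rot((1 - xi) * log_rot(R1) + xi * log_rot(R2))

def interp_linear(R1, R2, xi):
    """Naive entry-wise blend: generally NOT a rotation."""
    return [[(1 - xi) * R1[i][j] + xi * R2[i][j] for j in range(2)]
            for i in range(2)]
```

Halfway between the identity and a 90° rotation, `interp_log` returns the 45° rotation (determinant 1), whereas `interp_linear` returns a matrix with determinant 0.5, i.e., not a rotation.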
Abstract:
Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm. We propose to learn a variable selection policy for branch-and-bound in mixed-integer linear programming, by imitation learning on a diversified variant of the strong branching expert rule. We encode states as bipartite graphs and parameterize the policy as a graph convolutional neural network. Experiments on a series of synthetic problems demonstrate that our approach produces policies that can improve upon expert-designed branching rules on large problems, and generalize to instances significantly larger than seen during training.
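At its simplest, the bipartite encoding pairs one node per variable with one node per constraint, with one edge per nonzero coefficient. The sketch below is our own minimal illustration of that representation; the paper's actual per-node and per-edge features are much richer.

```python
def milp_to_bipartite(A, b, c):
    """Encode a MILP  min c^T x  s.t.  A x <= b  as a bipartite graph:
    variable nodes carry objective coefficients, constraint nodes carry
    right-hand sides, and edges carry the nonzero matrix coefficients."""
    var_nodes = [{"obj": cj} for cj in c]
    con_nodes = [{"rhs": bi} for bi in b]
    edges = [(i, j, aij)                       # (constraint, variable, A_ij)
             for i, row in enumerate(A)
             for j, aij in enumerate(row) if aij != 0]
    return var_nodes, con_nodes, edges
```

A graph convolutional network then passes messages along these edges to score each candidate branching variable.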
Abstract:
In this thesis I present a new three-way connection we found between quantum integrability, N=2 supersymmetric gauge theories, and black-hole perturbation theory. I use the ODE/IM correspondence between Ordinary Differential Equations (ODE) and Integrable Models (IM), first to connect basic integrability functions - Baxter's Q, T and Y functions - to the gauge-theory periods. This fundamental identification yields several new results for both theories, for example: an exact nonlinear integral equation (Thermodynamic Bethe Ansatz, TBA) for the gauge periods; an interpretation of the integrability functional relations as new exact R-symmetry relations for the periods; and new formulas for the local integrals of motion in terms of gauge periods. I develop this in full detail at least for the SU(2) gauge theory with Nf=0,1,2 matter flavours. Again through the ODE/IM correspondence, I connect the mathematically precise definition of quasinormal modes of black holes (which play an important role in gravitational-wave observations) with quantization conditions on the Q and Y functions. In this way I also give a mathematical explanation of the recently found connection between quasinormal modes and N=2 supersymmetric gauge theories. Moreover, a new, simple, and effective method to compute the quasinormal modes numerically - the TBA - follows, which I compare with other standard methods. The spacetimes for which I show this in full detail are, in the simplest Nf=0 case, the D3 brane, and in the Nf=1,2 cases, a generalization of extremal Reissner-Nordström (charged) black holes. I then begin treating the Nf=3,4 theories as well, and argue how our integrability-gauge-gravity correspondence can generalize to other types of black holes in either asymptotically flat (Nf=3) or Anti-de Sitter (Nf=4) spacetime. Finally, I begin to show the extension to a four-fold correspondence that also includes Conformal Field Theory (CFT), through the renowned AdS/CFT correspondence.
Abstract:
This dissertation investigates the relations between logic and theoretical computer science (TCS) in the probabilistic setting. It is motivated by two main considerations. On the one hand, since their appearance in the 1960s-1970s, probabilistic models have become increasingly pervasive in several fast-growing areas of CS. On the other, the study and development of (deterministic) computational models has considerably benefitted from the mutual interchanges between logic and CS; nevertheless, probabilistic computation was only marginally touched by such fruitful interactions. The goal of this thesis is precisely to start bridging this gap, by developing logical systems corresponding to specific aspects of randomized computation and, thereby, generalizing standard achievements to the probabilistic realm. To do so, our key ingredient is the introduction of new, measure-sensitive quantifiers associated with quantitative interpretations. The dissertation is tripartite. In the first part, we focus on the relation between logic and counting complexity classes. We show that, using our classical counting propositional logic, it is possible to generalize to counting classes the standard results by Cook and by Meyer and Stockmeyer linking propositional logic and the polynomial hierarchy. Indeed, we show that the validity problem for counting-quantified formulae captures the corresponding level in Wagner's hierarchy. In the second part, we consider programming-language theory. Type systems for randomized λ-calculi, also guaranteeing various forms of termination properties, were introduced in the last decades, but they are not "logically oriented" and no Curry-Howard correspondence is known for them. Following intuitions coming from counting logics, we define the first probabilistic version of the correspondence. Finally, we consider the relationship between arithmetic and computation. We present a quantitative extension of the language of arithmetic able to formalize basic results from probability theory. This language is also our starting point for defining randomized bounded theories and, thus, for generalizing canonical results by Buss.
Abstract:
The aim of this thesis is to present exact and heuristic algorithms for the integrated planning of multi-energy systems. The idea is to disaggregate the energy system, starting with its core, the Central Energy System, and then proceeding towards the decentral part. Therefore, a mathematical model for the generation expansion operations that optimizes the performance of a Central Energy System is first proposed. To ensure that the proposed generation operations are compatible with the network, some extensions of the existing network are considered as well. All these decisions are evaluated both from an economic viewpoint and from an environmental perspective, as specific constraints related to greenhouse-gas emissions are imposed in the formulation. The thesis then presents an optimization model for a solar organic Rankine cycle in the context of transactive energy trading. In this study, the impact that this technology can have on peer-to-peer trading in renewable-based community microgrids is inspected. Here the consumer becomes a prosumer and engages actively in virtual trading with other prosumers at the distribution-system level. Moreover, there is an investigation of how different technological parameters of the solar organic Rankine cycle may affect the final solution. Finally, the thesis introduces a tactical optimization model for the maintenance-scheduling phase of a Combined Heat and Power plant. Specifically, two types of cleaning operations are considered, i.e., online cleaning and offline cleaning. Furthermore, a piecewise-linear representation of the electric-efficiency variation curve is included. Given the challenge of solving the tactical management model, a heuristic algorithm is proposed. The heuristic works by solving the daily operational production-scheduling problem, based on the final consumer's demand and on electricity prices. The aggregate information from the operational problem is then used to derive maintenance decisions at the tactical level.
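The piecewise-linear representation of an efficiency curve mentioned above amounts to plain breakpoint interpolation; a minimal sketch follows, with made-up breakpoints for illustration rather than plant data.

```python
import bisect

def piecewise_linear(breakpoints, x):
    """Evaluate a piecewise-linear curve given sorted (load, efficiency)
    breakpoints; values outside the covered range are clamped."""
    xs = [p[0] for p in breakpoints]
    ys = [p[1] for p in breakpoints]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x)          # segment containing x
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

In a MILP formulation the same curve would be expressed with one binary or SOS2 variable per segment; this helper only evaluates the curve.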
Abstract:
Most cognitive functions require the encoding and routing of information across distributed networks of brain regions. Information propagation is typically attributed to physical connections existing between brain regions, and contributes to the formation of spatially correlated activity patterns, known as functional connectivity. While structural connectivity provides the anatomical foundation for neural interactions, the exact manner in which it shapes functional connectivity is complex and not yet fully understood. Additionally, traditional measures of directed functional connectivity capture only the overall correlation between neural activity, and provide no insight into the content of the transmitted information, limiting their usefulness for understanding the neural computations underlying the distributed processing of behaviorally relevant variables. In this work, we first study the relationship between structural and functional connectivity in simulated recurrent spiking neural networks with spike-timing-dependent plasticity. We use established measures of time-lagged correlation and overall information propagation to infer the temporal evolution of synaptic weights, showing that measures of dynamic functional connectivity can be used to reliably reconstruct the evolution of structural properties of the network. Then, we extend current methods of directed causal communication between brain areas by deriving an information-theoretic measure of Feature-specific Information Transfer (FIT) quantifying the amount, content and direction of information flow. We test FIT on simulated data, showing its key properties and advantages over traditional measures of overall propagated information. We show applications of FIT to several neural datasets obtained with different recording methods (magneto- and electro-encephalography, spiking activity, local field potentials) during various cognitive functions, ranging from sensory perception to decision making and motor learning. Overall, these analyses demonstrate the ability of FIT to advance the investigation of communication between brain regions, uncovering the previously unaddressed content of directed information flow.
Abstract:
In this work, we develop a randomized bounded arithmetic for probabilistic computation, following the approach adopted by Buss for non-randomized computation. This work relies on a notion of representability inspired by Buss's, but depending on a non-standard, quantitative and measurable semantics. We then establish that the representable functions are exactly those in PPT. Finally, we extend the language of our arithmetic with a measure quantifier, which is true if and only if the semantics of the quantified formula has measure greater than a given threshold. This allows us to define purely logical characterizations of standard probabilistic complexity classes such as BPP, RP, co-RP and ZPP.
Abstract:
Hydroxyurea (HU), or hydroxycarbamide, is used for the treatment of some myeloproliferative and neoplastic diseases, and is currently the only drug approved by the FDA for use in sickle cell disease (SCD). Despite the relative success of HU therapy for SCD, a genetic disorder of the hemoglobin β chain that results in red-cell sickling, hemolysis, vascular inflammation and recurrent vasoocclusion, the exact mechanisms by which HU acts remain unclear. We hypothesized that HU may modulate endothelial angiogenic processes, with important consequences for vascular inflammation. The effects of HU (50-200 μM; 17-24 h) on endothelial cell functions associated with key steps of angiogenesis were evaluated using human umbilical vein endothelial cell (HUVEC) cultures. Expression profiles of the HIF1A gene and the miRNAs 221 and 222, involved in endothelial function, were also determined in HUVECs following HU administration, and the direct in vivo antiangiogenic effects of HU were assessed using a mouse Matrigel-plug neovascularization assay. Following incubation with HU, HUVECs exhibited high cell viability, but displayed a significant 75% inhibition in the rate of capillary-like-structure formation, and significant decreases in proliferative and invasive capacities. Furthermore, HU significantly decreased HIF1A expression, and induced the expression of miRNA 221, while downregulating miRNA 222. In vivo, HU reduced vascular endothelial growth factor (VEGF)-induced vascular development in Matrigel implants over 7 days. These findings indicate that HU is able to inhibit vessel assembly, a crucial angiogenic process, both in vitro and in vivo, and suggest that some of HU's therapeutic effects may occur through novel vascular mechanisms.
Abstract:
To analyze the effects of treatment approach on the outcomes of newborns (birth weight [BW] < 1,000 g) with patent ductus arteriosus (PDA), from the Brazilian Neonatal Research Network (BNRN): death, bronchopulmonary dysplasia (BPD), severe intraventricular hemorrhage (IVH III/IV), retinopathy of prematurity requiring surgery (ROPsur), necrotizing enterocolitis requiring surgery (NECsur), and death/BPD. This was a multicenter cohort study with retrospective data collection, including newborns (BW < 1,000 g) with gestational age (GA) < 33 weeks and an echocardiographic diagnosis of PDA, from 16 neonatal units of the BNRN, from January 1, 2010 to December 31, 2011. Newborns who died or were transferred by the third day of life, and those with congenital malformations or infection, were excluded. Groups: G1 - conservative approach (without treatment); G2 - pharmacologic (indomethacin or ibuprofen); G3 - surgical ligation (independent of previous treatment). Factors analyzed: antenatal corticosteroid, cesarean section, BW, GA, 5-min Apgar score < 4, male gender, Score for Neonatal Acute Physiology Perinatal Extension (SNAPPE II), respiratory distress syndrome (RDS), late sepsis (LS), mechanical ventilation (MV), surfactant (< 2 h of life), and time on MV. Outcomes: death, O2 dependence at 36 weeks (BPD36wks), IVH III/IV, ROPsur, NECsur, and death/BPD36wks. Statistics: Student's t-test, chi-squared test, or Fisher's exact test; odds ratios (95% CI); binary logistic regression and backward stepwise multiple regression. Software: MedCalc (Medical Calculator), version 12.1.4.0; p-values < 0.05 were considered statistically significant. 1,097 newborns were selected and 494 were included: G1 - 187 (37.8%), G2 - 205 (41.5%), and G3 - 102 (20.6%). The highest mortality was observed in G1 (51.3%) and the lowest in G3 (14.7%). The highest frequencies of BPD36wks (70.6%) and ROPsur (23.5%) were observed in G3. The lowest occurrence of death/BPD36wks occurred in G2 (58.0%). Pharmacological (OR 0.29; 95% CI: 0.14-0.62) and conservative (OR 0.34; 95% CI: 0.14-0.79) treatments were protective for the outcome death/BPD36wks. The conservative approach to PDA was associated with high mortality, the surgical approach with the occurrence of BPD36wks and ROPsur, and the pharmacological treatment was protective for the outcome death/BPD36wks.
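Odds ratios with 95% confidence intervals, as reported above, are standard 2×2-table statistics; a minimal sketch using the Woolf log-OR normal approximation follows. The counts in the example are made up for illustration, not taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table
    [[a, b], [c, d]] = [[exposed-event, exposed-no-event],
                        [unexposed-event, unexposed-no-event]],
    using the Woolf (log-OR normal) approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 90, 20, 80)` returns an odds ratio below 1 with an interval bracketing it, the same form as the "OR 0.29; 95% CI: 0.14-0.62" figures above.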
Abstract:
To assess the effects of a soy dietary supplement on the main biomarkers of cardiovascular health in postmenopausal women, compared with the effects of low-dose hormone therapy (HT) and placebo. Double-blind, randomized and controlled intention-to-treat trial. Sixty healthy postmenopausal women, aged 40-60 years, with a mean time since menopause of 4.1 years, were recruited and randomly assigned to three groups: a soy dietary supplement group (isoflavone 90 mg), a low-dose HT group (estradiol 1 mg plus norethisterone 0.5 mg) and a placebo group. Lipid profile, glucose level, body mass index, blood pressure and abdominal/hip ratio were evaluated in all participants at baseline and after 16 weeks. Statistical analyses were performed using the χ2 test, Fisher's exact test, the Kruskal-Wallis non-parametric test, analysis of variance (ANOVA), the paired Student's t-test and the Wilcoxon test. After the 16-week intervention period, total cholesterol had decreased by 11.3% and LDL-cholesterol by 18.6% in the HT group, whereas neither changed in the soy dietary supplement or placebo groups. Values for triglycerides, HDL-cholesterol, glucose level, body mass index, blood pressure and abdominal/hip ratio did not change over time in any of the three groups. The use of the dietary soy supplement did not show any significant favorable effect on cardiovascular health biomarkers compared with HT. The trial is registered at the Brazilian Clinical Trials Registry (Registro Brasileiro de Ensaios Clínicos - ReBEC), number RBR-76mm75.
Abstract:
The gold standard for diagnosing cystic fibrosis (CF) is a sweat chloride value above 60 mEq/L. However, this historical and important tool has limitations; other techniques should be studied, including the nasal potential difference (NPD) test. CFTR gene sequencing can identify CFTR mutations, but this method is time-consuming and too expensive to be used in all CF centers. The present study compared CF patients with two class I-III CFTR mutations (10 patients) (G1), CF patients with class IV-VI CFTR mutations (five patients) (G2), and 21 healthy subjects (G3). The CF patients and healthy subjects underwent the NPD test. Statistical analysis was performed using the Mann-Whitney, Kruskal-Wallis, χ² and Fisher's exact tests, with α = 0.05. No differences were observed between the CF patients and healthy controls for the PDMax, Δamiloride, and Δchloride + free + amiloride markers from the NPD test. For the finger value, a difference between G2 and G3 was described. The Wilschanski index values were different between G1 and G3. In conclusion, our data showed that NPD is useful for CF diagnosis when class I-III CFTR mutations are screened. However, when classes IV-VI are considered, the NPD test showed an overlap in values with healthy subjects.
Abstract:
Diagnostic imaging techniques play an important role in assessing the exact location, cause, and extent of a nerve lesion, thus allowing clinicians to diagnose and manage more effectively a variety of pathological conditions, such as entrapment syndromes, traumatic injuries, and space-occupying lesions. Ultrasound and nuclear magnetic resonance imaging are becoming useful methods for this purpose, but they still lack spatial resolution. In this regard, recent phase contrast x-ray imaging experiments of peripheral nerve allowed the visualization of each nerve fiber surrounded by its myelin sheath as clearly as optical microscopy. In the present study, we attempted to produce high-resolution x-ray phase contrast images of a human sciatic nerve by using synchrotron radiation propagation-based imaging. The images showed high contrast and high spatial resolution, allowing clear identification of each fascicle structure and surrounding connective tissue. The outstanding result is the detection of such structures by phase contrast x-ray tomography of a thick human sciatic nerve section. This may further enable the identification of diverse pathological patterns, such as Wallerian degeneration, hypertrophic neuropathy, inflammatory infiltration, leprosy neuropathy and amyloid deposits. To the best of our knowledge, this is the first successful phase contrast x-ray imaging experiment of a human peripheral nerve sample. Our long-term goal is to develop peripheral nerve imaging methods that could supersede biopsy procedures.
Abstract:
The lateral pterygoid muscle (LPM) plays an important role in jaw movement and has been implicated in temporomandibular disorders (TMDs). Migraine has been described as a common symptom in patients with TMDs and may be related to muscle hyperactivity. This study aimed to compare LPM volume in individuals with and without migraine, using segmentation of the LPM in magnetic resonance (MR) imaging of the TMJ. Twenty patients with migraine and 20 volunteers without migraine underwent a clinical examination of the TMJ, according to the Research Diagnostic Criteria for TMDs. MR imaging was performed and the LPM was segmented using the ITK-SNAP 1.4.1 software, which calculates the volume of each segmented structure in voxels per cubic millimeter. The chi-squared test and Fisher's exact test were used to relate the TMD variables obtained from the MR images and clinical examinations to the presence of migraine. Binary logistic regression was used to determine the importance of each factor for predicting the presence of a migraine headache. Patients with TMDs and migraine tended to have hypertrophy of the LPM (58.7%). In addition, abnormal mandibular movements (61.2%) and disc displacement (70.0%) were the most common signs in patients with TMDs and migraine. In patients with TMDs and simultaneous migraine, the LPM tends to be hypertrophic. LPM segmentation on MR imaging may be an alternative method for studying this muscle in such patients, because a hypertrophic LPM is not always palpable.