955 results for Partition graphique
Abstract:
Crop water requirements are important elements for food production, especially in arid and semiarid regions. These regions are experiencing increasing population growth and have less water available for agriculture, which amplifies the need for more efficient irrigation. Improved water use efficiency is needed to produce more food while conserving water as a limited natural resource. Evaporation (E) from bare soil and transpiration (T) from plants are considered a critical part of the global water cycle and, in recent decades, climate change could lead to increased E and T. Because energy is required to break hydrogen bonds and vaporize water, the water and energy balances are closely connected. The soil water balance is also linked with water vapour losses to evapotranspiration (ET), which depend mainly on the energy balance at the Earth's surface. This work addresses the role of evapotranspiration in water use efficiency by developing a mathematical model that improves the accuracy of crop evapotranspiration calculation, accounting for the effects of weather conditions, e.g., wind speed and humidity, on the crop coefficient, which relates crop evapotranspiration to reference evapotranspiration. The ability to partition ET into evaporation and transpiration components will help irrigation managers find ways to improve water use efficiency by decreasing the ratio of evaporation to transpiration. The developed crop coefficient model will improve both irrigation scheduling and water resources planning in response to future climate change, which can improve world food production and water use efficiency in agriculture.
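The abstract does not spell out the underlying formulation; a plausible starting point for the crop-coefficient model described above is the standard FAO-56 dual crop coefficient, in which the climate adjustment for wind speed and humidity appears explicitly:
$$ET_c = K_c\,ET_0, \qquad K_c = K_{cb} + K_e,$$
$$K_{cb} = K_{cb,\mathrm{tab}} + \bigl[0.04\,(u_2 - 2) - 0.004\,(RH_{\min} - 45)\bigr]\left(\frac{h}{3}\right)^{0.3},$$
where $ET_0$ is reference evapotranspiration, $K_{cb}$ the basal (transpiration) coefficient, $K_e$ the soil-evaporation coefficient, $u_2$ the wind speed at 2 m, $RH_{\min}$ the minimum relative humidity, and $h$ the crop height. The $K_{cb}/K_e$ split is what allows ET to be partitioned into T and E; whether the model developed in the work takes exactly this form is an assumption here.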
Abstract:
This dissertation models the Turkish college admission procedure. It began with the purpose of reducing the inefficiencies in the Turkish market. For this purpose, we propose a mechanism under a new market structure, which we prefer to call semi-centralization. In Chapter 1, we give a brief summary of Matching Theory. We present the first examples in Matching history together with the most general papers and mechanisms. In Chapter 2, we propose our mechanism. In its real-life application, that is, Turkish university placements, the mechanism reduces the inefficiencies of the current system. The success of the mechanism depends on the preference profile. It is easy to show that under complete information the mechanism implements the full set of stable matchings for a given profile. In Chapter 3, we refine our basic mechanism. The modification of the mechanism has a crucial effect on the results. The new mechanism is what we call a middle mechanism. On one of the subdomains, this mechanism coincides with the original basic mechanism. But on the other partition, it gives the same results as Gale and Shapley's algorithm. In Chapter 4, we apply our basic mechanism to the well-known Roommate Problem. Since the roommate problem is a one-sided game, we first propose an auxiliary function to convert the game into a semi-centralized two-sided game, because our basic mechanism is designed for this framework. We show that this process succeeds in finding a stable matching whenever one exists. We also show that our mechanism easily and simply tells us if a profile lacks stability by using purified orderings. Finally, we show a method to find all the stable matchings when multiple stable matchings exist. The method is simply to run the mechanism for all of the top agents in the social preference.
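The abstract refers to Gale and Shapley's algorithm as a benchmark; the dissertation's own semi-centralized mechanism is not specified here. As standard background only, a minimal sketch of student-proposing deferred acceptance (function and variable names are illustrative):

```python
def deferred_acceptance(student_prefs, college_prefs, quotas):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> list of colleges, most preferred first
    college_prefs: dict college -> list of students, most preferred first
    quotas:        dict college -> number of seats
    Returns a stable matching as dict student -> college (None if unmatched).
    """
    rank = {c: {s: i for i, s in enumerate(p)} for c, p in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next college each student will propose to
    held = {c: [] for c in college_prefs}         # tentatively accepted students per college
    free = list(student_prefs)

    while free:
        s = free.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                              # s exhausted its list and stays unmatched
        c = prefs[next_choice[s]]
        next_choice[s] += 1
        if s not in rank[c]:
            free.append(s)                        # c finds s unacceptable; s proposes again later
            continue
        held[c].append(s)
        held[c].sort(key=lambda x: rank[c][x])    # keep the college's most preferred students
        if len(held[c]) > quotas[c]:
            free.append(held[c].pop())            # reject the least preferred held student

    matching = {s: None for s in student_prefs}
    for c, students in held.items():
        for s in students:
            matching[s] = c
    return matching
```

With quotas of one this is the classical marriage version; the output is the student-optimal stable matching.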
Abstract:
In this work I report recent results in the field of equilibrium Statistical Mechanics, in particular in Spin Glass models and Monomer-Dimer models. We start by giving the mathematical background and the general formalism for (disordered) spin models, with some of their applications to physical and mathematical problems. Next we move on to general aspects of the theory of spin glasses, in particular the Sherrington-Kirkpatrick model, which is of fundamental interest for this work. In Chapter 3, we introduce the Multi-species Sherrington-Kirkpatrick model (MSK), we prove the existence of the thermodynamic limit and Guerra's bound for the quenched pressure, together with a detailed analysis of the annealed and the replica symmetric regimes. The result is a multidimensional generalization of Parisi's theory. Finally we briefly illustrate the strategy of Panchenko's proof of the lower bound. In Chapter 4 we discuss the Aizenman-Contucci and the Ghirlanda-Guerra identities for a wide class of Spin Glass models. As an example of application, we discuss the role of these identities in the proof of the lower bound. In Chapter 5 we introduce the basic mathematical formalism of Monomer-Dimer models. We introduce a Gaussian representation of the partition function that will be fundamental in the rest of the work. In Chapter 6, we introduce an interacting Monomer-Dimer model. Its exact solution is derived and a detailed study of its analytical properties and related physical quantities is performed. In Chapter 7, we introduce quenched randomness in the Monomer-Dimer model and show that, under suitable conditions, the pressure is a self-averaging quantity. The main result is that, if we consider randomness only in the monomer activity, the model is exactly solvable.
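For reference, the Sherrington-Kirkpatrick model discussed above is standardly defined (the abstract itself does not restate it) by the Hamiltonian
$$H_N(\sigma) = -\frac{1}{\sqrt{N}}\sum_{1\le i<j\le N} g_{ij}\,\sigma_i\sigma_j - h\sum_{i=1}^{N}\sigma_i, \qquad \sigma\in\{-1,+1\}^N,$$
with $g_{ij}$ i.i.d. standard Gaussian couplings, and the central object is the quenched pressure
$$p_N(\beta,h) = \frac{1}{N}\,\mathbb{E}\log\sum_{\sigma\in\{-1,+1\}^N} e^{-\beta H_N(\sigma)},$$
whose limit, Guerra's bound, and multi-species generalization are the subject of Chapters 3 and 4.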
Abstract:
The long process that would lead Octavian to the conquest of power and to the foundation of the principate is marked by a few key moments and by the various figures who accompanied his rise. Even before excelling on the battlefield, Caesar's adoptive son stood out for his great ability to manage alliances and personal relationships, and for the great skill with which he managed to pass from revolutionary leader to representative and member of the traditional aristocracy. It was not an easy or linear path, and perhaps the most difficult task was not routing his opponents at Actium, but preserving a power that was constantly contested. Even after 31 BC, in fact, on more than one occasion Augustus was called upon to defend his creation (the principate) and to keep modifying its power base and structure: only through this fundamental, painstaking, yet hidden work did he manage to lay the foundations for a power structure destined to endure unchanged for at least a century. On these premises, the research is organized according to a twofold chronological criterion, inserting, within the frame provided by the events, a partition that takes further breaks and decisive moments into account. The aim is to underline how, within a unitary reign characterized by the permanence of a single ruler, it is possible to glimpse the alternation of different historical situations, of balances of power, alliances and unions, by virtue of which different orientations emerged in both domestic and foreign policy.
Abstract:
Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses the need for a framework and the reasons why it was implemented. Testing algorithms under fair conditions, providing libraries that ease the implementation of algorithms, and automating the build phase were the fundamental motivations for starting the framework. The driving reason was that, in surveying the scientific literature on DDM and its algorithms, we noticed that each paper generated its own ad hoc data for testing. A further goal of the framework is therefore to compare the algorithms on a consistent data set. We decided to test the framework on the Cloud to obtain a more reliable comparison between runs by different users. Two of the most widely used services were considered: Amazon AWS EC2 and Google App Engine. The advantages and disadvantages of each are presented, along with the reason for choosing Google App Engine. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were carried out on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were run on the sequential versions of the algorithms, so a further reduction in execution time may be possible for the Interval Tree Matching algorithm.
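As a minimal illustration of the matching problem the framework benchmarks, the Brute Force baseline can be sketched as follows (Python, for illustration only; the thesis' framework and the more efficient Binary Partition, Improved Sort, and Interval Tree Matching algorithms are not reproduced here):

```python
from typing import List, Tuple

Extent = List[Tuple[float, float]]  # one (lower, upper) interval per routing-space dimension

def overlaps(a: Extent, b: Extent) -> bool:
    """Two extents overlap iff their intervals overlap on every dimension."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(a, b))

def brute_force_matching(updates: List[Extent], subscriptions: List[Extent]):
    """Return all (update index, subscription index) pairs whose extents overlap."""
    return [(i, j)
            for i, u in enumerate(updates)
            for j, s in enumerate(subscriptions)
            if overlaps(u, s)]
```

The pairwise check costs O(U * S) extent comparisons, which is exactly the cost the interval-based algorithms are designed to reduce.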
Abstract:
In many patients, optimal results after pallidal deep brain stimulation (DBS) for primary dystonia may appear over several months, possibly beyond 1 year after implant. In order to elucidate the factors predicting such protracted clinical effect, we retrospectively reviewed the clinical records of 44 patients with primary dystonia and bilateral pallidal DBS implants. Patients with fixed skeletal deformities, as well as those with a history of prior ablative procedures, were excluded. The Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) scores at baseline, 1 and 3 years after DBS were used to evaluate clinical outcome. All subjects showed a significant improvement after DBS implants (mean BFMDRS improvement of 74.9% at 1 year and 82.6% at 3 years). Disease duration (DD, median 15 years, range 2-42) and age at surgery (AS, median 31 years, range 10-59) showed a significant negative correlation with DBS outcome at 1 and 3 years. A partition analysis, using DD and AS, clustered subjects into three groups: (1) younger subjects with shorter DD (n = 19, AS < 27, DD ≤ 17); (2) older subjects with shorter DD (n = 8, DD ≤ 17, AS ≥ 27); (3) older subjects with longer DD (n = 17, DD > 17, AS ≥ 27). Younger patients with short DD benefitted more and faster than older patients, who however continued to improve 10% on average 1 year after DBS implants. Our data suggest that subjects with short DD may expect to achieve a better general outcome than those with longer DD and that AS may influence the time necessary to achieve maximal clinical response.
Abstract:
Semi-weak n-hyponormality is defined and studied using the notion of positive determinant partition. Several examples related to semi-weakly n-hyponormal weighted shifts are discussed. In particular, it is proved that there exists a semi-weakly three-hyponormal weighted shift $W_\alpha$ with $\alpha_0 = \alpha_1 < \alpha_2$ which is not two-hyponormal, which illustrates the gaps between various weak subnormalities.
Abstract:
Binding of hydrophobic chemicals to colloids such as proteins or lipids is difficult to measure using classical microdialysis methods due to low aqueous concentrations, adsorption to dialysis membranes and test vessels, and slow kinetics of equilibration. Here, we employed a three-phase partitioning system where silicone (polydimethylsiloxane, PDMS) serves as a third phase to determine partitioning between water and colloids and acts at the same time as a dosing device for hydrophobic chemicals. The applicability of this method was demonstrated with bovine serum albumin (BSA). Measured binding constants (K(BSAw)) for chlorpyrifos, methoxychlor, nonylphenol, and pyrene were in good agreement with an established quantitative structure-activity relationship (QSAR). A fifth compound, fluroxypyr-methyl-heptyl ester, was excluded from the analysis because of apparent abiotic degradation. The PDMS depletion method was then used to determine partition coefficients for test chemicals in rainbow trout (Oncorhynchus mykiss) liver S9 fractions (K(S9w)) and blood plasma (K(bloodw)). Measured K(S9w) and K(bloodw) values were consistent with predictions obtained using a mass-balance model that employs the octanol-water partition coefficient (K(ow)) as a surrogate for lipid partitioning and K(BSAw) to represent protein binding. For each compound, K(bloodw) was substantially greater than K(S9w), primarily because blood contains more lipid than liver S9 fractions (1.84% of wet weight vs 0.051%). Measured liver S9 and blood plasma binding parameters were subsequently implemented in an in vitro to in vivo extrapolation model to link the in vitro liver S9 metabolic degradation assay to in vivo metabolism in fish. Apparent volumes of distribution (V(d)) calculated from the experimental data were similar to literature estimates. However, the calculated binding ratios (f(u)) used to relate in vitro metabolic clearance to clearance by the intact liver were 10 to 100 times lower than values used in previous modeling efforts. Bioconcentration factors (BCF) predicted using the experimental binding data were substantially higher than the predicted values obtained in earlier studies and correlated poorly with measured BCF values in fish. One possible explanation for this finding is that chemicals bound to proteins can desorb rapidly and thus contribute to metabolic turnover of the chemicals. This hypothesis remains to be investigated in future studies, ideally with chemicals of higher hydrophobicity.
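The abstract does not give the mass-balance model explicitly; a plausible additive form consistent with its description (lipid partitioning approximated by $K_{\mathrm{ow}}$, protein binding by $K_{\mathrm{BSAw}}$) is
$$K_{\mathrm{matrix/w}} \approx f_{\mathrm{lip}}\,K_{\mathrm{ow}} + f_{\mathrm{prot}}\,K_{\mathrm{BSAw}} + f_{\mathrm{w}},$$
where $f_{\mathrm{lip}}$, $f_{\mathrm{prot}}$, and $f_{\mathrm{w}}$ are the lipid, protein, and water fractions of the matrix (blood plasma or liver S9). Under such a model, the higher lipid content of blood (1.84% vs 0.051% of wet weight) directly explains why K(bloodw) exceeds K(S9w).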
Abstract:
The goal of this paper is to contribute to the understanding of complex polynomials and Blaschke products, two very important function classes in mathematics. For a polynomial, $f,$ of degree $n,$ we study when it is possible to write $f$ as a composition $f=g\circ h$, where $g$ and $h$ are polynomials, each of degree less than $n.$ A polynomial is defined to be \emph{decomposable} if such an $h$ and $g$ exist, and a polynomial is said to be \emph{indecomposable} if no such $h$ and $g$ exist. We apply the results of Rickards in \cite{key-2}. We show that $$C_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,(z-z_{1})(z-z_{2})...(z-z_{n})\,\mbox{is decomposable}\},$$ has measure $0$ when considered as a subset of $\mathbb{R}^{2n}.$ Using this we prove the stronger result that $$D_{n}=\{(z_{1},z_{2},...,z_{n})\in\mathbb{C}^{n}\,|\,\mbox{There exists\,}a\in\mathbb{C}\,\,\mbox{with}\,\,(z-z_{1})(z-z_{2})...(z-z_{n})(z-a)\,\mbox{decomposable}\},$$ also has measure zero when considered as a subset of $\mathbb{R}^{2n}.$ We show that for any polynomial $p$, there exists an $a\in\mathbb{C}$ such that $p(z)(z-a)$ is indecomposable, and we also examine the case of $D_{5}$ in detail. The main work of this paper studies finite Blaschke products, analytic functions on $\overline{\mathbb{D}}$ that map $\partial\mathbb{D}$ to $\partial\mathbb{D}.$ In analogy with polynomials, we discuss when a degree $n$ Blaschke product, $B,$ can be written as a composition $C\circ D$, where $C$ and $D$ are finite Blaschke products, each of degree less than $n.$ Decomposable and indecomposable are defined analogously. Our main results are divided into two sections. First, we equate a condition on the zeros of the Blaschke product with the existence of a decomposition where the right-hand factor, $D,$ has degree $2.$ We also equate decomposability of a Blaschke product, $B,$ with the existence of a Poncelet curve, whose foci are a subset of the zeros of $B,$ such that the Poncelet curve satisfies certain tangency conditions. This result is hard to apply in general, but has a very nice geometric interpretation when we desire a composition where the right-hand factor has degree 2 or 3. Our second section of finite Blaschke product results builds on the work of Cowen in \cite{key-3}. For a finite Blaschke product $B,$ Cowen defines the so-called monodromy group, $G_{B},$ of the finite Blaschke product. He then equates the decomposability of a finite Blaschke product, $B,$ with the existence of a nontrivial partition, $\mathcal{P},$ of the branches of $B^{-1}(z),$ such that $G_{B}$ respects $\mathcal{P}$. We present an in-depth analysis of how to calculate $G_{B}$, extending Cowen's description. These methods allow us to equate the existence of a decomposition where the left-hand factor has degree 2, with a simple condition on the critical points of the Blaschke product. In addition, we are able to put a condition on the structure of $G_{B}$ for any decomposable Blaschke product satisfying certain normalization conditions. The final section of this paper discusses how one can put the results of the paper into practice to determine whether a particular Blaschke product is decomposable. We compare three major algorithms.
The first is a brute force technique where one searches through the zero set of $B$ for subsets which could be the zero set of $D$, exhaustively searching for a successful decomposition $B(z)=C(D(z)).$ The second algorithm involves simply examining the cardinality of the image, under $B,$ of the set of critical points of $B.$ For a degree $n$ Blaschke product, $B,$ if this cardinality is greater than $\frac{n}{2}$, the Blaschke product is indecomposable. The final algorithm attempts to apply the geometric interpretation of decomposability given by our theorem concerning the existence of a particular Poncelet curve. The final two algorithms can be implemented easily with the use of an HTML
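A sketch of the counting that appears to underlie the second criterion (stated here as an illustration, not as the paper's proof): if $B = C\circ D$ with $\deg C = c \ge 2$, $\deg D = d \ge 2$ and $cd = n$, then every critical value of $B$ is either a critical value of $C$ or the image under $C$ of a critical value of $D$, so
$$\bigl|B(\mathrm{crit}\,B)\bigr| \le (c-1) + (d-1) = c + d - 2 \le \frac{n}{2},$$
since $c + d \le 2 + n/2$ for any factorization of $n$ with both factors at least $2$. A cardinality exceeding $n/2$ therefore rules out any decomposition.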
Abstract:
Features encapsulate the domain knowledge of a software system and thus are valuable sources of information for a reverse engineer. When analyzing the evolution of a system, we need to know how and which features were modified to recover both the change intention and its extent, namely which source artifacts are affected. Typically, the implementation of a feature crosscuts a number of source artifacts. To obtain a mapping between features and source artifacts, we exercise the features and capture their execution traces. However, this results in large traces that are difficult to interpret. To tackle this issue we compact the traces into simple sets of source artifacts that participate in a feature's runtime behavior. We refer to these compacted traces as feature views. Within a feature view, we partition the source artifacts into disjoint sets of characterized software entities. The characterization defines the level of participation of a source entity in the features. We then analyze the features over several versions of a system and we plot their evolution to reveal how and which features were affected by changes in the code. We show the usefulness of our approach by applying it to a case study where we address the problem of merging parallel development tracks of the same system.
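A minimal sketch of the trace-compaction and partitioning step described above (the category names used for the characterization are illustrative assumptions, not necessarily those of the paper):

```python
from typing import Dict, List, Set

def feature_views(traces: Dict[str, List[str]]) -> Dict[str, Set[str]]:
    """Compact each execution trace into a feature view: the set of source artifacts it exercises."""
    return {feature: set(trace) for feature, trace in traces.items()}

def characterize(views: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Partition source entities into disjoint sets by their level of participation in features."""
    participation: Dict[str, int] = {}
    for view in views.values():
        for entity in view:
            participation[entity] = participation.get(entity, 0) + 1
    n_features = len(views)
    groups: Dict[str, Set[str]] = {"single-feature": set(), "shared": set(), "used-everywhere": set()}
    for entity, count in participation.items():
        if count == 1:
            groups["single-feature"].add(entity)       # participates in exactly one feature
        elif count < n_features:
            groups["shared"].add(entity)               # participates in some, but not all, features
        else:
            groups["used-everywhere"].add(entity)      # infrastructural: appears in every feature
    return groups
```

Comparing the groups computed for two versions of the system then shows which features gained or lost artifacts between releases.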
Abstract:
Professor Sir David R. Cox (DRC) is widely acknowledged as among the most important scientists of the second half of the twentieth century. He inherited the mantle of statistical science from Pearson and Fisher, advanced their ideas, and translated statistical theory into practice so as to forever change the application of statistics in many fields, but especially biology and medicine. The logistic and proportional hazards models he substantially developed are arguably among the most influential biostatistical methods in current practice. This paper looks forward over the period from DRC's 80th to 90th birthdays to speculate about the future of biostatistics, drawing lessons from DRC's contributions along the way. We consider "Cox's model" of biostatistics, an approach to statistical science that: formulates scientific questions or quantities in terms of parameters gamma in probability models f(y; gamma) that represent, in a parsimonious fashion, the underlying scientific mechanisms (Cox, 1997); partitions the parameters gamma = (theta, eta) into a subset of interest theta and other "nuisance parameters" eta necessary to complete the probability distribution (Cox and Hinkley, 1974); develops methods of inference about the scientific quantities that depend as little as possible upon the nuisance parameters (Barndorff-Nielsen and Cox, 1989); and thinks critically about the appropriate conditional distribution on which to base inferences. We briefly review exciting biomedical and public health challenges that are capable of driving statistical developments in the next decade. We discuss the statistical models and model-based inferences central to the CM approach, contrasting them with computationally-intensive strategies for prediction and inference advocated by Breiman and others (e.g., Breiman, 2001) and with more traditional design-based methods of inference (Fisher, 1935). We discuss the hierarchical (multi-level) model as an example of the future challenges and opportunities for model-based inference. We then consider the role of conditional inference, a second key element of the CM. Recent examples from genetics are used to illustrate these ideas. Finally, the paper examines causal inference and statistical computing, two other topics we believe will be central to biostatistics research and practice in the coming decade. Throughout the paper, we attempt to indicate how DRC's work and the "Cox Model" have set a standard of excellence to which all can aspire in the future.
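As a concrete instance of the parameter partition described above (our illustration; the abstract does not give the paper's own example), Cox's proportional hazards model
$$\lambda(t \mid x) = \lambda_0(t)\,\exp(x^{\top}\beta)$$
takes $\theta = \beta$ as the parameters of interest and $\eta = \lambda_0(\cdot)$, the baseline hazard, as an infinite-dimensional nuisance parameter; the partial likelihood
$$L(\beta) = \prod_{i:\ \delta_i = 1} \frac{\exp(x_i^{\top}\beta)}{\sum_{j \in R(t_i)} \exp(x_j^{\top}\beta)}$$
(with $\delta_i$ the event indicator and $R(t_i)$ the risk set at time $t_i$) yields inference about $\beta$ that does not depend on $\lambda_0$.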
Abstract:
Radiolabeled somatostatin analogues have been successfully used for targeted radiotherapy and for imaging of somatostatin receptor (sst1-5)-positive tumors. Nevertheless, the tumor-to-nontarget ratio of these analogues still needs to be improved to enhance their diagnostic or therapeutic properties and to prevent nephrotoxicity. In order to understand the influence of lipophilicity and charge on the pharmacokinetic profile of [1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA)]-somatostatin-based radioligands such as [DOTA,1-Nal3]-octreotide (DOTA-NOC), different spacers (X) based on 8-amino-3,6-dioxaoctanoic acid (PEG2), 15-amino-4,7,10,13-tetraoxapentadecanoic acid (PEG4), N-acetyl glucosamine (GlcNAc), triglycine, beta-alanine, aspartic acid, and lysine were introduced between the chelator DOTA and the peptide NOC. All DOTA-X-NOC conjugates were synthesized by Fmoc solid-phase synthesis. The partition coefficient (log D) at pH = 7.4 indicated that higher hydrophilicity than [111In-DOTA]-NOC was achieved with the introduction of the mentioned spacers, except with triglycine and beta-alanine. The high affinity of [InIII-DOTA]-NOC for human sst2 (hsst2) was preserved with the structural modifications, while an overall drop in hsst3 affinity was observed, except in the case of [InIII-DOTA]-beta-Ala-NOC. The new conjugates preserved the good affinity for hsst5, except for [InIII-DOTA]-Asn(GlcNAc)-NOC, which showed decreased affinity. A significant 1.2-fold improvement in the specific internalization rate in AR4-2J rat pancreatic tumor cells (sst2 receptor expression) at 4 h was achieved with the introduction of Asp as a spacer in the parent compound. In sst3-expressing HEK cells, the specific internalization rate at 4 h for [111In-DOTA]-NOC (13.1% +/- 0.3%) was maintained with [111In-DOTA]-beta-Ala-NOC (14.0% +/- 1.8%), but the remaining derivatives showed <2% specific internalization. Biodistribution studies were performed with Lewis rats bearing the AR4-2J rat pancreatic tumor. In comparison to [111In-DOTA]-NOC (2.96% +/- 0.48% IA/g), the specific uptake in the tumor at 4 h p.i. was significantly improved for the 111In-labeled sugar analogue (4.17% +/- 0.46% IA/g), which among all the new derivatives presented the best tumor-to-kidney ratio (1.9).
Abstract:
A k-cycle decomposition of order n is a partition of the edges of the complete graph on n vertices into k-cycles. In this report a backtracking algorithm is developed to count the number of inequivalent k-cycle decompositions of order n.
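The report's algorithm is not reproduced in the abstract; as an illustration of the underlying enumeration task, the following minimal backtracking sketch (in Python) counts labelled decompositions of $K_n$ into k-cycles. It does not perform the equivalence reduction (factoring out vertex relabellings) that the report's count of inequivalent decompositions requires.

```python
from itertools import combinations

def count_labelled_decompositions(n, k):
    """Count partitions of the edge set of K_n into k-cycles (labelled count)."""
    edges = frozenset(frozenset(e) for e in combinations(range(n), 2))

    def cycles_through(edge, uncovered):
        """All k-cycles (as frozensets of edges) containing `edge` and using only uncovered edges."""
        a, b = sorted(edge)
        found = set()

        def extend(path):
            if len(path) == k:
                closing = frozenset((path[-1], path[0]))
                if closing in uncovered:
                    found.add(frozenset(frozenset((path[i], path[i + 1]))
                                        for i in range(k - 1)) | {closing})
                return
            last = path[-1]
            for v in range(n):
                if v not in path and frozenset((last, v)) in uncovered:
                    extend(path + [v])

        extend([a, b])                        # traverse each cycle once, oriented a -> b
        return found

    def backtrack(uncovered):
        if not uncovered:
            return 1
        edge = min(uncovered, key=sorted)     # always branch on the smallest uncovered edge
        return sum(backtrack(uncovered - cyc)
                   for cyc in cycles_through(edge, uncovered))

    return backtrack(edges)

# Example: K_7 decomposes into 3-cycles in 30 labelled ways
# (the Steiner triple systems on 7 points).
# print(count_labelled_decompositions(7, 3))
```

Branching on the smallest uncovered edge guarantees each decomposition is generated exactly once, which is the standard trick behind such backtracking counts.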
Abstract:
Fuzzy community detection is to identify fuzzy communities in a network, which are groups of vertices in the network such that the membership of a vertex in one community is in [0,1] and the sum of its memberships over all communities equals 1. Fuzzy communities are pervasive in social networks, but only a few works have addressed fuzzy community detection. Recently, a one-step forward extension of Newman's Modularity, the most popular quality function for disjoint community detection, resulted in the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since the introduction of a fuzzy intersection operation is made feasible by GM. The experimental results show that the Yager operator with a proper parameter value performs better than the product operator in revealing community structure. Then, we focus on how to find optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. The work on the FMM problem results in the major contribution of this thesis, an efficient and effective GM-based fuzzy community detection method that automatically discovers a fuzzy partition of a network when appropriate, which is much better than fuzzy partitions found by existing fuzzy community detection methods, and a crisp partition of a network when appropriate, which is competitive with partitions produced by the best disjoint community detection methods to date. We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM). We present two approaches for solving this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. The OSMM problem reduces to a simplified quadratic knapsack problem that can be solved in linear time; thus, a solution of OSMM can be found in linear time. Since the OSMM algorithm is called recursively within FBLN and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm runs in a time complexity of at least $O(n^2)$. We therefore also propose several highly efficient and very effective heuristic algorithms, namely the FMM/H algorithms. We compared our proposed FMM/H algorithms with two state-of-the-art community detection methods, modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and a Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H is the best performing version. The H2 algorithm is very competitive with GALS in producing maximum modularity partitions and performs much better than MSFCM. On all the 10 data sets, H2 is also 2-3 orders of magnitude faster than GALS. Furthermore, by adopting a simply modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, namely GAFCD, where elite selection and early termination are applied. The crossover operator is designed to make GAFCD converge fast and to enhance GAFCD's ability to escape local optima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
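The abstract does not state the GM formula; a minimal sketch of generalized modularity with the product operator as the fuzzy intersection (one plausible reading of the quality function described above) is:

```python
import numpy as np

def generalized_modularity(A, U):
    """Generalized (fuzzy) modularity with the product t-norm.

    A: (n, n) symmetric adjacency matrix
    U: (n, c) membership matrix, each row summing to 1
    """
    k = A.sum(axis=1)                       # vertex degrees
    two_m = A.sum()                         # 2m for an undirected graph stored symmetrically
    B = A - np.outer(k, k) / two_m          # modularity (null-model) matrix
    S = U @ U.T                             # S[i, j] = sum_c u_ic * u_jc (product operator)
    return float((B * S).sum() / two_m)
```

When U is a 0/1 indicator matrix of a crisp partition, this reduces to Newman's modularity, which is consistent with GM being described as a one-step extension of it.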
Abstract:
Certain fatty acid N-alkyl amides from the medicinal plant Echinacea activate cannabinoid type-2 (CB2) receptors. In this study we show that the CB2-binding Echinacea constituents dodeca-2E,4E-dienoic acid isobutylamide (1) and dodeca-2E,4E,8Z,10Z-tetraenoic acid isobutylamide (2) form micelles in aqueous medium. In contrast, micelle formation is not observed for undeca-2E-ene-8,10-diynoic acid isobutylamide (3), which does not bind to CB2, or structurally related endogenous cannabinoids, such as arachidonoyl ethanolamine (anandamide). The critical micelle concentration (CMC) range of 1 and 2 was determined by fluorescence spectroscopy as 200-300 and 7400-10000 nM, respectively. The size of premicelle aggregates, micelles, and supermicelles was studied by dynamic light scattering. Microscopy images show that compound 1, but not 2, forms globular and rod-like supermicelles with radii of approximately 75 nm. The self-assembling N-alkyl amides partition between themselves and the CB2 receptor, and aggregation of N-alkyl amides thus determines their in vitro pharmacological effects. Molecular mechanics by Monte Carlo simulations of the aggregation process support the experimental data, suggesting that both 1 and 2 can readily aggregate into premicelles, but only 1 spontaneously assembles into larger aggregates. These findings have important implications for biological studies with this class of compounds.