933 results for Julia sets


Relevance: 20.00%

Publisher:

Abstract:

Aspergillus fumigatus grows well at neutral and acidic pH in a medium containing protein as the sole nitrogen source by secreting two different sets of proteases. Neutral pH favors the secretion of neutral and alkaline endoproteases, leucine aminopeptidases (Laps), which are nonspecific monoaminopeptidases, and an X-prolyl dipeptidase (DppIV). An acidic pH environment promotes the secretion of an aspartic endoprotease of the pepsin family (Pep1) and of tripeptidyl-peptidases of the sedolisin family (SedB and SedD). A novel prolyl peptidase, AfuS28, was found to be secreted under both alkaline and acidic conditions. In previous studies, Laps were shown to degrade peptides from their N-terminus until an X-Pro sequence acts as a stop signal. X-Pro sequences can then be removed by DppIV, which allows Laps access to the following residues. We have shown that at acidic pH Seds degrade large peptides from their N-terminus into tripeptides until a Pro in the P1 or P1' position acts as a stop for these exopeptidases. However, X-X-Pro and X-X-X-Pro sequences can be removed by AfuS28, thus allowing Seds to continue sequential proteolysis. In conclusion, both the alkaline and the acidic sets of proteases contain exoprotease activity capable of cleaving after proline residues that cannot be removed during sequential digestion by nonspecific exopeptidases.

Relevance: 20.00%

Publisher:

Abstract:

Anorexia nervosa (AN) is a complex and heritable eating disorder characterized by dangerously low body weight. Neither candidate gene studies nor an initial genome-wide association study (GWAS) have yielded significant and replicated results. We performed a GWAS in 2907 cases with AN from 14 countries (15 sites) and 14 860 ancestrally matched controls as part of the Genetic Consortium for AN (GCAN) and the Wellcome Trust Case Control Consortium 3 (WTCCC3). Individual association analyses were conducted in each stratum and meta-analyzed across all 15 discovery data sets. Seventy-six (72 independent) single nucleotide polymorphisms were taken forward for in silico (two data sets) or de novo (13 data sets) replication genotyping in 2677 independent AN cases and 8629 European ancestry controls, along with 458 AN cases and 421 controls from Japan. The final global meta-analysis across discovery and replication data sets comprised 5551 AN cases and 21 080 controls. AN subtype analyses (1606 AN restricting; 1445 AN binge-purge) were performed. No findings reached genome-wide significance. Two intronic variants were suggestively associated: rs9839776 (P = 3.01 × 10⁻⁷) in SOX2OT and rs17030795 (P = 5.84 × 10⁻⁶) in PPP3CA. Two additional signals were specific to Europeans: rs1523921 (P = 5.76 × 10⁻⁶) between CUL3 and FAM124B and rs1886797 (P = 8.05 × 10⁻⁶) near SPATA13. Comparing discovery with replication results, 76% of the effects were in the same direction, an observation highly unlikely to be due to chance (P = 4 × 10⁻⁶), strongly suggesting that true findings exist but our sample, the largest yet reported, was underpowered for their detection. The accrual of large genotyped AN case-control samples should be an immediate priority for the field.
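
The direction-concordance claim above amounts to a binomial sign test: under the null hypothesis of no true association, the discovery and replication effects of each variant agree in sign with probability one half. A minimal sketch, assuming the test was run on the 72 independent SNPs with roughly 76% concordance (the abstract does not give the exact counts):

from scipy.stats import binomtest  # SciPy >= 1.7

n_snps = 72                                   # assumption: the independent SNPs taken forward
n_same_direction = round(0.76 * n_snps)       # ~76% concordant, as reported

# Under the null, each effect direction agrees with probability 0.5.
result = binomtest(n_same_direction, n=n_snps, p=0.5, alternative="greater")
print(f"{n_same_direction}/{n_snps} concordant, one-sided P = {result.pvalue:.1e}")

With these assumed counts the one-sided P-value is of the same order of magnitude as the reported P = 4 × 10⁻⁶.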

Relevance: 20.00%

Publisher:

Abstract:

Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method that clusters and reduces the dimensionality of input EEMs without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient, and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology was that outlier samples appeared naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
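
A minimal sketch of the SOM-plus-component-plane idea described above, using the third-party minisom package (not the authors' code; the EEMs are assumed to be already unfolded into one vector of excitation/emission intensities per sample):

import numpy as np
from minisom import MiniSom  # pip install minisom

def som_component_plane_correlations(eems, grid=(10, 10), n_iter=5000, seed=0):
    """Train a SOM on unfolded EEMs and correlate its component planes.

    Each component plane weights[:, :, k] maps one excitation/emission pair
    onto the SOM grid; strongly correlated planes point to wavelength pairs
    that vary together, i.e. to a common fluorescence component.
    """
    som = MiniSom(grid[0], grid[1], eems.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.random_weights_init(eems)
    som.train_random(eems, n_iter)

    weights = som.get_weights()                     # (grid_x, grid_y, n_wavelength_pairs)
    planes = weights.reshape(-1, eems.shape[1]).T   # one row per wavelength pair
    plane_corr = np.corrcoef(planes)                # pairwise correlations of planes
    bmus = np.array([som.winner(v) for v in eems])  # best-matching unit per sample
    return plane_corr, bmus

Grouping highly correlated rows of plane_corr (for example by hierarchical clustering) yields candidate fluorescence components, while the best-matching units can be inspected for sample patterns such as the effluent and flash-flood groups mentioned above.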

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents a topological approach to studying fuzzy sets by means of modifier operators. Modifier operators are mathematical models, e.g., for hedges, and we briefly present different approaches to studying modifier operators. We are interested in compositional modifier operators, modifiers for short, and these modifiers depend on binary relations. We show that if a modifier depends on a reflexive and transitive binary relation on U, then there exists a unique topology on U such that this modifier is the closure operator in that topology. Also, if U is finite then there exists a lattice isomorphism between the class of all reflexive and transitive relations on U and the class of all topologies on U. We define a topological similarity relation "≈" between L-fuzzy sets in a universe U, and show that the class L^U/≈ is isomorphic with the class of all topologies on U, if U is finite and L is suitable. We consider finite bitopological spaces as approximation spaces, and we show that lower and upper approximations can be computed by means of α-level sets also in the case of equivalence relations. This means that approximations in the sense of Rough Set Theory can be computed by means of α-level sets. Finally, we present an application to data analysis: we study an approach to detecting dependencies of attributes in database-like systems, called information systems.
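
The closure-operator claim can be illustrated on crisp sets, the simplest special case of L-fuzzy sets. The sketch below is illustrative only (the thesis works with general L-fuzzy sets and its exact definitions are not reproduced here); it checks the Kuratowski closure axioms for the operator induced by a reflexive and transitive relation:

from itertools import combinations, product
import random

def reflexive_transitive_closure(pairs, U):
    """Saturate a relation on U so that it becomes reflexive and transitive."""
    R = {(x, x) for x in U} | set(pairs)
    changed = True
    while changed:
        changed = False
        for x, y, z in product(U, repeat=3):
            if (x, y) in R and (y, z) in R and (x, z) not in R:
                R.add((x, z))
                changed = True
    return R

def modifier(A, R):
    """cl(A) = {x : x R a for some a in A} (one common convention)."""
    return frozenset(x for (x, a) in R if a in A)

U = range(6)
random.seed(1)
R = reflexive_transitive_closure(
    {(random.randrange(6), random.randrange(6)) for _ in range(8)}, U)

subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]
for A in subsets:
    assert A <= modifier(A, R)                            # extensive
    assert modifier(modifier(A, R), R) == modifier(A, R)  # idempotent
for A, B in product(subsets, repeat=2):
    assert modifier(A | B, R) == modifier(A, R) | modifier(B, R)  # preserves unions
print("Kuratowski closure axioms hold on all", len(subsets), "subsets of U")

Reflexivity gives A ⊆ cl(A), transitivity gives cl(cl(A)) ⊆ cl(A), and preservation of unions then makes cl the closure operator of a unique topology on U.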

Relevance: 20.00%

Publisher:

Abstract:

Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, and raises a tremendous limitation to practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of the a priori parameterisation of models. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing importance, these were: the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
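
The acceptability criterion mentioned above - comparing the model's RMSE with the experimental error on the measurements - can be written down in a few lines. A minimal sketch with made-up numbers (the paper's exact test statistic is not reproduced here):

import numpy as np

def rmse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))

def acceptable(observed, simulated, measurement_error):
    """Judge the simulation acceptable when its RMSE does not exceed the
    (assumed) experimental error on the measurements."""
    return rmse(observed, simulated) <= measurement_error

# Hypothetical soil water content example (m3 m-3)
obs = [0.31, 0.28, 0.25, 0.22]
sim = [0.29, 0.27, 0.26, 0.20]
print(rmse(obs, sim), acceptable(obs, sim, measurement_error=0.03))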

Relevance: 20.00%

Publisher:

Abstract:

Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{z_{n1},\dots,z_{nn}\}$ of $C$ which maximizes the expression $\prod_{1\le i<j\le n}|z_{ni}-z_{nj}|^{2}\,\mathrm{e}^{-n(Q(z_{n1})+\cdots+Q(z_{nn}))}$. We show that Fekete sets are, in a sense, maximally spread out with respect to the equilibrium measure. In general, our results apply only to a part of the Fekete set, which is at a certain distance away from the boundary of the droplet. However, for the potential $Q=|z|^2$ we obtain results which hold globally, and we conjecture that such global results are true for a wide range of potentials.
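
For the model potential mentioned at the end of the abstract, the droplet and the equilibrium measure are explicit (a standard fact stated here for orientation; normalisation conventions vary between papers, so this is illustrative rather than quoted from the work above):

\[
  Q(z)=|z|^{2}
  \quad\Longrightarrow\quad
  S=\{\,z : |z|\le 1\,\},
  \qquad
  \mathrm{d}\mu_{Q}=\frac{1}{\pi}\,\mathrm{d}A \ \text{ on } S,
\]

that is, the droplet is the closed unit disk and the equilibrium measure is the normalised area measure on it, which makes this the simplest case in which to test statements that hold globally up to the boundary.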

Relevance: 20.00%

Publisher:

Abstract:

The basic goal of this study is to extend old and propose new ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary. For example, solving the knapsack problem (which is here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems. These categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 interrupts the more abstract treatment for a while by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes, the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, which is called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good density without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to join iteratively small knapsacks, each element of which would satisfy the superincreasing condition. As a whole, none of these systems needs to become superincreasing, though the density develops no better than in the superincreasing case. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final Chapter 10 first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase the security or to make the construction easier. The latter goal is attempted by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
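
For readers unfamiliar with the construction reviewed in Chapter 1, the following toy sketch shows a superincreasing knapsack with a modular-multiplication trapdoor (the textbook Merkle-Hellman scheme; the parameters are tiny and the scheme itself is known to be insecure, so this is illustration only, not a construction from the thesis):

# Toy Merkle-Hellman knapsack: superincreasing private knapsack,
# modular multiplication as the trapdoor transformation.
private = [2, 3, 7, 14, 30, 57, 120, 251]      # superincreasing sequence
modulus = 491                                   # > sum(private)
multiplier = 41                                 # gcd(multiplier, modulus) == 1
inverse = pow(multiplier, -1, modulus)          # trapdoor: modular inverse

public = [(multiplier * a) % modulus for a in private]   # published knapsack

def encrypt(bits):
    """Ciphertext is the subset sum of the public knapsack."""
    return sum(b for bit, b in zip(bits, public) if bit)

def decrypt(ciphertext):
    """Undo the modular multiplication, then solve the superincreasing knapsack greedily."""
    target = (ciphertext * inverse) % modulus
    bits = []
    for a in reversed(private):
        bits.append(1 if a <= target else 0)
        target -= a * bits[-1]
    assert target == 0, "decipherment failed"
    return list(reversed(bits))

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(message)) == message

Both weaknesses discussed above are visible even in this toy: the public knapsack inherits structure from the superincreasing private one, and the modular multiplication is the only thing hiding that structure.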

Relevance: 20.00%

Publisher:

Abstract:

We introduce a method for surface reconstruction from point sets that is able to cope with noise and outliers. First, a splat-based representation is computed from the point set. A robust local 3D RANSAC-based procedure is used to filter the point set for outliers, then a local jet surface - a low-degree surface approximation - is fitted to the inliers. Second, we extract the reconstructed surface in the form of a surface triangle mesh through Delaunay refinement. The Delaunay refinement meshing approach requires computing intersections between line segment queries and the surface to be meshed. In the present case, intersection queries are solved from the set of splats through a 1D RANSAC procedure.
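
A simplified sketch of the local RANSAC filtering step (illustrative only: the method above fits low-degree jet surfaces to splats, whereas this sketch uses a plane, the degree-1 special case, and assumes points is an (N, 3) NumPy array):

import numpy as np
from scipy.spatial import cKDTree

def ransac_plane(pts, n_iter=100, threshold=0.01, seed=None):
    """Return (point_on_plane, unit_normal) of the best-supported plane."""
    rng = np.random.default_rng(seed)
    best, best_inliers = (pts[0], np.array([0.0, 0.0, 1.0])), -1
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((pts - sample[0]) @ normal)
        n_inliers = int(np.sum(dist < threshold))
        if n_inliers > best_inliers:
            best_inliers, best = n_inliers, (sample[0], normal)
    return best

def filter_outliers(points, k=12, threshold=0.01):
    """Keep points lying within `threshold` of a RANSAC plane fitted to their k nearest neighbours."""
    tree = cKDTree(points)
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        origin, normal = ransac_plane(points[idx], seed=i)
        keep[i] = abs((p - origin) @ normal) < threshold
    return points[keep]

The surviving points would then be handed to the jet-fitting and Delaunay refinement stages described above.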