961 results for PARTITION


Relevance:

10.00%

Publisher:

Abstract:

The Mg/Ca and Sr/Ca ratios of living ostracods belonging to 15 different species, sampled monthly over a one-year cycle at five sites (2, 5, 13, 33, and 70 m water depth) in western Lake Geneva (Switzerland), are compared to the oxygen and carbon isotope compositions measured on the same samples, as well as to the temperature and chemical composition of the water (δ18O of the water, δ13C of the DIC, and the Mg/Ca and Sr/Ca of the water) at the time of ostracod calcification. The results indicate that trace element incorporation varies at the species level, mainly because of the ecological and biological differences between the species (life cycle, (micro-)habitat preference, biomineralisation processes) and their control on trace element incorporation. In littoral zones, the Mg/Ca and Sr/Ca of ostracod valves increase as the temperature and the Mg/Ca and Sr/Ca of the water increase during spring and summer, hence mainly reflecting seasonal variations. However, given that in Lake Geneva the Mg/Ca and Sr/Ca of the water also vary with temperature, it is not possible to distinguish the effects of temperature from those of changes in the chemical composition of the water on the trace element content of ostracod valves. The results support the view that both water temperature and water Mg/Ca and Sr/Ca ratios control the final trace element content of Cyprididae valves. In contrast, the trace element content of species living in the deeper zones of the basin is influenced, for the infaunal species, by variations in the chemical composition of the pore water. The trace element content measured for these specimens cannot, therefore, be used to reconstruct the composition of the lake bottom water. In addition, the incorporation of Mg and Sr into the shell differs from one family, sub-family, or even species to another. This suggests that the distinctive Mg and Sr partition coefficients of the analysed taxa result from different valve calcification strategies that may be phylogenetic.
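For reference, the partition coefficients mentioned at the end of the abstract follow the standard definition used in trace-element geochemistry (not restated in the abstract):

\[
D_{\mathrm{Me}} = \frac{(\mathrm{Me/Ca})_{\mathrm{valve\ calcite}}}{(\mathrm{Me/Ca})_{\mathrm{water}}}, \qquad \mathrm{Me} = \mathrm{Mg}\ \text{or}\ \mathrm{Sr},
\]

so species-specific calcification strategies show up as distinct values of D_Mg and D_Sr even for ostracods calcifying in the same water.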

Relevance:

10.00%

Publisher:

Abstract:

The immobile location-allocation (LA) problem is a type of LA problem that consists in determining the service each facility should offer in order to optimize some criterion (such as the global demand covered), given the positions of the facilities and the customers. Because the problem is combinatorial, with a number of candidate solutions that grows exponentially with the number of facilities and of possible services, and because its search space is non-convex with several local optima, traditional methods cannot be applied directly. We therefore proposed the use of cluster analysis to convert the initial problem into several smaller sub-problems. In this way, we presented and analysed the suitability of several clustering methods to partition the LA problem described above. We then explored the use of metaheuristic techniques such as genetic algorithms, simulated annealing, and cuckoo search to solve the sub-problems after the clustering step.
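As a minimal sketch of the proposed decomposition (the coordinates, cluster count, and data layout below are hypothetical, and k-means is chosen only for concreteness), cluster analysis splits one large immobile LA instance into independent sub-problems:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
facilities = rng.uniform(0, 100, size=(50, 2))   # fixed facility coordinates (hypothetical)
customers = rng.uniform(0, 100, size=(1000, 2))  # customer coordinates (hypothetical)

# Partition the data space by clustering the customers; each facility joins the
# cluster whose centroid is nearest, yielding independent LA sub-problems.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers)
facility_cluster = km.predict(facilities)

subproblems = [
    {
        "facilities": facilities[facility_cluster == c],
        "customers": customers[km.labels_ == c],
    }
    for c in range(k)
]
# Each sub-problem can now be handed to a metaheuristic (genetic algorithm,
# simulated annealing, cuckoo search) that chooses the service offered by each facility.
```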

Relevance:

10.00%

Publisher:

Abstract:

Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to whether they are treated in continuous or discrete time. We propose an approach based on a state transformation, more particularly on a partition of the phase portrait into different regions, where each subregion is modeled as a two-dimensional linear time-invariant system. A Takagi-Sugeno model, which is a combination of the local models, is then computed. The simulation results show that the Alpha partition is well suited for dealing with such systems.
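Purely as an illustration (the local matrices and membership functions below are hypothetical, not those of the paper), a Takagi-Sugeno model blends the local linear models of a phase-plane partition with normalized membership weights:

```python
import numpy as np

# Two local linear models, one per region of the phase-plane partition (hypothetical values).
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.0, 1.0], [-4.0, -0.2]])]

def memberships(x):
    """Normalized weights h_i(x) deciding how much each local model contributes.
    Here a simple sigmoid on the first state variable marks the region boundary."""
    w = np.array([1.0 / (1.0 + np.exp(5.0 * x[0])),
                  1.0 / (1.0 + np.exp(-5.0 * x[0]))])
    return w / w.sum()

def ts_dynamics(x):
    """Takagi-Sugeno model: x_dot = sum_i h_i(x) * A_i x."""
    h = memberships(x)
    return sum(hi * Ai @ x for hi, Ai in zip(h, A))

# Simple forward-Euler simulation of the blended model.
x, dt = np.array([1.0, 0.0]), 0.01
trajectory = [x]
for _ in range(1000):
    x = x + dt * ts_dynamics(x)
    trajectory.append(x)
```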

Relevance:

10.00%

Publisher:

Abstract:

Globalization involves several facility location problems that need to be handled at large scale. Location Allocation (LA) is a combinatorial problem in which the distances among points in the data space matter. Precisely because of this distance property of the domain, we exploit the capability of clustering techniques to partition the data space, in order to convert an initial large LA problem into several simpler LA problems. In particular, our motivating problem involves a huge geographical area that can be partitioned under the overall conditions. We present different types of clustering techniques and then perform a cluster analysis over our dataset in order to partition it. After that, we solve the LA problem by applying a simulated annealing algorithm to the clustered and the non-clustered data, in order to work out how profitable the clustering is and which of the presented methods is the most suitable.
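For concreteness, a minimal simulated-annealing sketch for one allocation sub-problem is given below (distance-based cost, random reassignment moves, geometric cooling); all data and parameter values are hypothetical and only illustrate the kind of procedure compared in the abstract:

```python
import math

import numpy as np

rng = np.random.default_rng(1)
customers = rng.uniform(0, 100, size=(200, 2))   # customer locations (hypothetical)
facilities = rng.uniform(0, 100, size=(10, 2))   # candidate facility locations (hypothetical)

def cost(assignment):
    """Total customer-to-assigned-facility distance."""
    return float(np.linalg.norm(customers - facilities[assignment], axis=1).sum())

# Start from a random assignment of each customer to a facility.
current = rng.integers(0, len(facilities), size=len(customers))
best, best_cost = current.copy(), cost(current)
T = 100.0
for step in range(20000):
    candidate = current.copy()
    candidate[rng.integers(len(customers))] = rng.integers(len(facilities))
    delta = cost(candidate) - cost(current)
    # Metropolis rule: always accept improvements, sometimes accept worsenings.
    if delta < 0 or rng.random() < math.exp(-delta / T):
        current = candidate
        if cost(current) < best_cost:
            best, best_cost = current.copy(), cost(current)
    T *= 0.9995  # geometric cooling schedule

print("best total distance:", round(best_cost, 1))
```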

Relevance:

10.00%

Publisher:

Abstract:

Antigen-specific T-cell activation implicates a redistribution of plasma membrane-bound molecules into lipid rafts, such as the coreceptors CD8 and CD4, the Src kinases Lck and Fyn, and the linker for activation of T cells (LAT), which results in the formation of signaling complexes. These molecules partition into lipid rafts because of the palmitoylation of cytoplasmic, membrane-proximal cysteines, which is essential for their functional integrity in T-cell activation. Here, we show that exogenous dipalmitoyl-phosphatidylethanolamine (DPPE), but not the related unsaturated dioleoyl-phosphatidylethanolamine (DOPE), partitions into lipid rafts. DPPE inhibits activation of CD8(+) T lymphocytes by sensitized syngeneic antigen-presenting cells or specific major histocompatibility complex (MHC) peptide tetramers, as indicated by esterase release and intracellular calcium mobilization. Cytotoxic T lymphocyte (CTL)-target cell conjugate formation is not affected by DPPE, indicating that engagement of the T-cell receptor by its cognate ligand is intact in lipid-treated cells. In contrast to other agents known to block raft-dependent signaling, DPPE efficiently inhibits the MHC peptide-induced recruitment of palmitoylated signaling molecules to lipid rafts and CTL activation without affecting cell viability or lipid raft integrity.

Relevance:

10.00%

Publisher:

Abstract:

ECG criteria for left ventricular hypertrophy (LVH) have been almost exclusively developed and calibrated in white populations. Because several interethnic differences in ECG characteristics have been found, the applicability of these criteria to African individuals remains to be demonstrated. We therefore investigated the performance of classic ECG criteria for LVH detection in an African population. Digitized 12-lead ECG tracings were obtained from 334 African individuals randomly selected from the general population of the Republic of Seychelles (Indian Ocean). Left ventricular mass (LVM) was calculated with M-mode echocardiography and indexed to body height. LVH was defined by taking the 95th percentile of body height-indexed LVM values in a reference subgroup. In the entire study sample, 16 men and 15 women (prevalence 9.3%) were finally declared to have LVH, of whom 9 belonged to the reference subgroup. Sensitivity, specificity, accuracy, and positive and negative predictive values for LVH were calculated for 9 classic ECG criteria, and receiver operating characteristic curves were computed. We also generated a new composite time-voltage criterion with stepwise multiple linear regression: weighted time-voltage criterion = (0.2366 R(aVL) + 0.0551 R(V5) + 0.0785 S(V3) + 0.2993 T(V1)) × QRS duration. The Sokolow-Lyon criterion reached the highest sensitivity (61%) and the R(aVL) voltage criterion the highest specificity (97%) when evaluated at their traditional partition values. However, at a fixed specificity of 95%, the sensitivity of these 10 criteria ranged from 16% to 32%. The best accuracy was obtained with the R(aVL) voltage criterion and the new composite time-voltage criterion (89% for both). Positive and negative predictive values varied considerably depending on the concomitant presence of 3 clinical risk factors for LVH (hypertension, age ≥50 years, overweight). Median positive and negative predictive values of the 10 ECG criteria were 15% and 95%, respectively, for subjects with none or 1 of these risk factors, compared with 63% and 76% for subjects with all of them. In conclusion, the performance of classic ECG criteria for LVH detection was largely disparate and appeared to be lower in this population of East African origin than in white subjects. A newly generated composite time-voltage criterion might provide improved performance. The predictive value of ECG criteria for LVH was considerably enhanced by integrating information on concomitant clinical risk factors for LVH.
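For convenience, the composite criterion quoted above can be evaluated directly from the lead measurements. The function below is a minimal sketch; the measurement units for the wave amplitudes and the QRS duration are an assumption, since the abstract does not state them:

```python
def weighted_time_voltage(r_avl, r_v5, s_v3, t_v1, qrs_duration):
    """Composite time-voltage LVH criterion quoted in the abstract.

    r_avl, r_v5, s_v3, t_v1 -- wave amplitudes in the named leads
    qrs_duration            -- QRS duration
    (units assumed consistent with the original regression, which the
    abstract does not specify)
    """
    return (0.2366 * r_avl + 0.0551 * r_v5 + 0.0785 * s_v3
            + 0.2993 * t_v1) * qrs_duration
```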

Relevance:

10.00%

Publisher:

Abstract:

Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods to calculate heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate VMT. Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation, in which the data are split into n partitions and each partition in turn is used to validate estimates developed from the remaining data. The comparison was made using the estimates of prediction error obtained from cross-validation, where the prediction error is the average squared error between the estimated AADT and the actual AADT. Overall, the prediction error was lowest for the method that developed expansion factors separately for the different truck groups, for both single- and multi-unit trucks. This indicates that the use of expansion factors specific to heavy trucks results in better estimates of AADT, and subsequently VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated. Significant variation exists in the temporal and seasonal patterns of heavy trucks compared with passenger vehicles. This suggests that the use of aggregate expansion factors fails to adequately describe truck travel patterns.
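The abstract does not include code; the sketch below (entirely synthetic counts and a hypothetical one-month short count at each held-out station) shows the general shape of a monthly expansion-factor estimate of AADT and its n-fold cross-validation across count stations, not the Iowa DOT procedure itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stations, n_folds = 30, 5
# Synthetic daily single-unit truck counts: 365 days x stations (hypothetical).
daily_counts = rng.poisson(lam=rng.uniform(50, 500, n_stations), size=(365, n_stations))
month_of_day = np.repeat(np.arange(12), [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

errors = []
folds = np.array_split(rng.permutation(n_stations), n_folds)
for held_out in folds:
    train = np.setdiff1d(np.arange(n_stations), held_out)
    # Monthly expansion factors from the training stations: AADT / monthly ADT.
    aadt_train = daily_counts[:, train].mean(axis=0)
    monthly_adt = np.array([daily_counts[month_of_day == m][:, train].mean(axis=0)
                            for m in range(12)])
    factors = (aadt_train / monthly_adt).mean(axis=1)   # one factor per month
    for s in held_out:
        # Pretend only a one-month count (July) is available at the held-out station.
        short_count = daily_counts[month_of_day == 6, s].mean()
        est_aadt = short_count * factors[6]
        true_aadt = daily_counts[:, s].mean()
        errors.append((est_aadt - true_aadt) ** 2)

print("cross-validated mean squared error:", round(float(np.mean(errors)), 1))
```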

Relevance:

10.00%

Publisher:

Abstract:

Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (non-threshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with only a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second problem is to characterize the access structures of ideal multiplicative LSSSs; specifically, the open problem considered is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem in matroid theory, since it can be restated in terms of the representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in this classification: the identically self-dual bipartite matroids.
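The paper deals with general multiplicative LSSSs; the toy sketch below uses Shamir's threshold scheme (the textbook example of a multiplicative LSSS), with a small prime field and parameters chosen by us, only to illustrate the multiplicative property: locally multiplied shares still determine the product of the two secrets once enough of them are combined.

```python
import random

P = 2_147_483_647  # prime modulus for the toy field (2**31 - 1)

def share(secret, t, n):
    """Split `secret` into n Shamir shares using a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Multiplicative property: the coordinate-wise product of two share vectors lies on a
# degree-2t polynomial whose constant term is the product of the secrets, so 2t + 1
# shares suffice to reconstruct a*b without revealing a or b individually.
t, n = 2, 5            # n >= 2t + 1
a, b = 1234, 5678
sa, sb = share(a, t, n), share(b, t, n)
prod_shares = [(xa, ya * yb % P) for (xa, ya), (_, yb) in zip(sa, sb)]
assert reconstruct(prod_shares) == a * b % P
print("reconstructed product:", reconstruct(prod_shares))
```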

Relevance:

10.00%

Publisher:

Abstract:

The subject of the thesis is the study of the endecasillabo (hendecasyllable) in the poetic work of Giacomo Leopardi. From a methodological point of view, this work builds on the research of Marco Praloran and Arnaldo Soldani devoted to Petrarch's endecasillabo. The author compares Leopardi's Canti with the production of the major writers of the late eighteenth and early nineteenth centuries: Melchiorre Cesarotti (Poesie di Ossian), Giuseppe Parini (Il Mattino), Vittorio Alfieri (the Rime and Saul), Vincenzo Monti (the poetic works), Ugo Foscolo (the Rime) and Alessandro Manzoni (the poetic works). This yields a corpus of several thousand lines, which were scanned not by computer but one by one according to their intonation. The first part of the thesis is devoted to the analysis of the rhythm of the different authors, with general statistics that allow these data to be compared with one another and with the older Italian authors such as Dante, Petrarch, Ariosto, etc. In this way we obtain an overall view of Italian prosody from its origins up to the nineteenth century, a view that makes it possible to bring Leopardi's technique into full focus. In the second part, the author proposes, text by text, the scansion of Leopardi's entire poetic work. We thus have an interpretative reading of the rhythm of all the poems, one that takes into account the year of composition and the different metrical typologies of the texts: for example, the canzoni, the idilli, and the canti pisano-recanatesi. In this second part, the effort to link the partitioning of rhythm to the content of the different texts should be emphasized.

Relevance:

10.00%

Publisher:

Abstract:

The Drivers Scheduling Problem (DSP) consists of selecting a set of duties for vehicle drivers, for example drivers or pilots of buses, trains, planes or boats, for the transportation of passengers or goods. This is a complex problem because it involves several constraints related to labour and company rules and can also present different evaluation criteria and objectives. Developing an adequate model for this problem, one that represents the real problem as closely as possible, is an important research area. The main objective of this research work is to present new mathematical models for the DSP that represent the full complexity of the drivers scheduling problem, and also to demonstrate that the solutions of these models can easily be implemented in real situations. This issue has been recognized by several authors as an important problem in public transportation. The most well-known and general formulation for the DSP is a Set Partitioning/Set Covering model (SPP/SCP). However, to a large extent these models simplify some of the specific business aspects and issues of real problems. This makes it difficult to use them in automatic planning systems, because the schedules obtained must be modified manually before they can be implemented in real situations. Based on extensive passenger transportation experience with bus companies in Portugal, we propose new alternative models to formulate the DSP. These models are also based on Set Partitioning/Covering models; however, they take into account the bus operators' issues and the users' perspective, opinions, and environment. We follow the steps of the Operations Research methodology, which consist of: Identify the Problem; Understand the System; Formulate a Mathematical Model; Verify the Model; Select the Best Alternative; Present the Results of the Analysis; and Implement and Evaluate. All of these steps are carried out with the close participation and involvement of the final users from different transportation companies. The planners' opinions and main criticisms are used to improve the proposed models in a continuous enrichment process. The final objective is to have a model that can be incorporated into an information system and used as an automatic tool to produce driver schedules. Therefore, the criterion for evaluating the models is their capacity to generate real and useful schedules that can be implemented without many manual adjustments or modifications. We considered the following measures of the quality of a model: simplicity, solution quality and applicability. We tested the alternative models with a set of real data obtained from several different transportation companies and analyzed the optimal schedules obtained with respect to the applicability of the solution to the real situation. To do this, the schedules were analyzed by the planners to determine their quality and applicability. The main result of this work is the proposition of new mathematical models for the DSP that better represent the realities of passenger transportation operators and lead to better schedules that can be implemented directly in real situations.
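For reference, the generic Set Partitioning/Set Covering (SPP/SCP) formulation mentioned above is the standard one in crew scheduling; the notation below is ours, not the thesis's:

\[
\min \sum_{j \in J} c_j x_j
\quad \text{subject to} \quad
\sum_{j \in J} a_{ij} x_j = 1 \;\; \forall i \in I
\;\;(\text{or } \ge 1 \text{ in the covering variant}),
\qquad x_j \in \{0,1\},
\]

where I is the set of pieces of work to be covered, J the set of feasible duties, a_{ij} = 1 if duty j covers piece of work i, and c_j the cost of duty j. The alternative models described in the abstract build on this basic structure while taking the operators' rules and the users' environment into account.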

Relevance:

10.00%

Publisher:

Abstract:

We propose an alternative method for measuring intergenerational mobility. Measurements obtained from traditional methods (based on panel data) are scarce, difficult to compare across countries and almost impossible to obtain across time. In particular, this means that we do not know how intergenerational mobility is correlated with growth, income or the degree of inequality. Our proposal is to measure the informative content of surnames in one census. The more information the surname carries about the income of an individual, the more important her background is in determining her outcomes, and thus the less mobility there is. The reason is that surnames provide information about family relationships because the distribution of surnames is necessarily very skewed: a large percentage of the population is bound to have a very infrequent surname, and for them the partition generated by surnames is very informative about family linkages. First, we develop a model whose endogenous variable is the joint distribution of surnames and income. There, we explore the relationship between mobility and the informative content of surnames, allowing assortative mating to be a determinant of both. Second, we use our methodology to show that in a large Spanish region the informative content of surnames is large and consistent with the model. We also show that it has increased over time, indicating a substantial drop in the degree of mobility. Finally, using the peculiarities of the Spanish surname convention, we show that the degree of assortative mating has also increased over time, in such a manner that it might explain the observed decrease in mobility. Our method allows us to provide measures of mobility that are comparable across time. It should also allow us to study other issues related to inheritance.
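The abstract does not spell out its estimator; purely as a hypothetical illustration of the idea, one simple proxy for how informative the surname partition is about income is the share of income variance explained by surname groups (the column names and function below are our own, not the paper's):

```python
import pandas as pd

def surname_informativeness(df: pd.DataFrame) -> float:
    """Share of income variance explained by the surname partition
    (between-group variance / total variance). A rough R^2-style proxy
    for the 'informative content of surnames'; not the paper's estimator."""
    total_var = df["income"].var()
    group_means = df.groupby("surname")["income"].transform("mean")
    between_var = group_means.var()
    return float(between_var / total_var)

# Hypothetical usage on census-like microdata with 'surname' and 'income' columns:
# census = pd.read_csv("census.csv")
# print(surname_informativeness(census))
```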

Relevance:

10.00%

Publisher:

Abstract:

166 countries have some kind of public old age pension. What economic forces create and sustain old age Social Security as a public program? We document some of the internationally and historically common features of Social Security programs, including explicit and implicit taxes on labor supply, pay-as-you-go features, intergenerational redistribution, and benefits which are increasing functions of lifetime earnings and are not means-tested. We partition theories of Social Security into three groups: "political", "efficiency" and "narrative" theories. We explore three political theories in this paper: the majority rational voting model (with its two versions: "the elderly as the leaders of a winning coalition with the poor" and the "once and for all election" model), the "time-intensive model of political competition" and the "taxpayer protection model". Each of the explanations is compared with the international and historical facts. A companion paper explores the "efficiency" and "narrative" theories, and derives implications of all the theories for replacing the typical pay-as-you-go system with a forced savings plan.

Relevance:

10.00%

Publisher:

Abstract:

The shape of supercoiled DNA molecules in solution is directly visualized by cryo-electron microscopy of vitrified samples. We observe that: (i) supercoiled DNA molecules in solution adopt an interwound rather than a toroidal form, (ii) the diameter of the interwound superhelix changes from about 12 nm to 4 nm upon addition of magnesium salt to the solution and (iii) the partition of the linking deficit between twist and writhe can be quantitatively determined for individual molecules.
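For context, the partition referred to in point (iii) follows a standard relation in DNA topology (White's formula, not restated in the abstract):

\[
Lk = Tw + Wr, \qquad \Delta Lk = Lk - Lk_0 = \Delta Tw + Wr,
\]

where the relaxed molecule, with essentially zero writhe, serves as the reference Lk_0. Because the cryo-electron micrographs give direct access to the writhe of individual molecules, the split of the linking deficit ΔLk between twist and writhe can be read off molecule by molecule.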

Relevance:

10.00%

Publisher:

Abstract:

The monogenetic kinetoplastid protozoan parasite Herpetomonas samuelpessoai expresses a surface-exposed metalloprotease. Comparable to the Leishmania promastigote surface protease (PSP), the protease of Herpetomonas is active at the surface of fixed and live organisms, and both enzymes display an identical cleavage specificity toward a nonapeptide substrate. The protease was enriched 440-fold by partitioning into Triton X-114, followed by two steps of anion-exchange chromatography. The 56-kDa enzyme is inhibited by the metal chelator 1,10-phenanthroline and is susceptible to cleavage by glycosyl-phosphatidylinositol phospholipase C (GPI-PLC). The conservation of an identical surface protease activity in these monogenetic and digenetic trypanosomatids suggests that the enzyme has a physiological function in the promastigote (insect) stage of these parasites.