862 results for Complexity of Relations
Abstract:
The advent of new signal processing methods, such as non-linear analysis techniques, represents a new perspective that adds further value to the analysis of brain signals. In particular, Lempel–Ziv Complexity (LZC) has proven useful in exploring the complexity of the brain's electromagnetic activity. However, an important problem is the lack of knowledge about the physiological determinants of these measures. Although a correlation between complexity and connectivity has been proposed, this hypothesis was never tested in vivo. Thus, the correlation between the microstructure of anatomical connectivity and the functional complexity of the brain needs to be inspected. In this study we analyzed the correlation between LZC and fractional anisotropy (FA), a scalar quantity derived from diffusion tensors that is particularly useful as an estimate of the functional integrity of myelinated axonal fibers, in a group of sixteen healthy adults (all female, mean age 65.56 ± 6.06 years, range 58–82). Our results showed a positive correlation between FA and LZC scores in regions including clusters in the splenium of the corpus callosum, the cingulum, parahippocampal regions and the sagittal stratum. This study supports the notion of a positive correlation between the functional complexity of the brain and the microstructure of its anatomical connectivity. Our investigation shows that combining neuroanatomical and neurophysiological techniques may shed light on the underlying physiological determinants of the brain's oscillations.
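As a concrete illustration, the following is a minimal sketch of the phrase-counting (LZ76/LZ78-style) variant of Lempel–Ziv complexity commonly applied to binarized EEG/MEG signals. The median binarization and the normalization are standard choices assumed here, not necessarily the authors' exact pipeline:

```python
import numpy as np

def lempel_ziv_complexity(signal):
    """Normalized Lempel-Ziv complexity of a 1-D signal.

    The signal is binarized around its median (a common choice for
    EEG/MEG analyses), then parsed left to right into previously
    unseen phrases; the phrase count is normalized by n / log2(n) so
    values are comparable across signal lengths.
    """
    binary = (np.asarray(signal) > np.median(signal)).astype(int)
    phrases, word, count = set(), (), 0
    for bit in binary:
        word = word + (int(bit),)
        if word not in phrases:      # a new phrase ends here
            phrases.add(word)
            count += 1
            word = ()                # start collecting the next phrase
    n = len(binary)
    return count * np.log2(n) / n
```

A regular signal (e.g. a sinusoid) yields far fewer distinct phrases, and hence a lower normalized complexity, than white noise of the same length.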
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems may be only a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that would probably not be detected by analyzing each resource separately.
In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
Several authors have analysed the changes in the probability density function of solar radiation at different time resolutions. Others have studied the significance of these changes for produced-energy calculations. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of a solar radiation model needed for yearly energy calculations is very low. Twelve values of the monthly mean of solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the results of energy estimations.
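The headline claim, that twelve monthly means suffice for yearly estimates, corresponds to a very simple calculation. The sketch below is illustrative: the irradiation values, array area and efficiency are invented for the example, not taken from the Spanish databases:

```python
# Monthly mean daily global irradiation in kWh/m^2/day (illustrative values).
MONTHLY_MEAN = [2.1, 3.0, 4.3, 5.4, 6.5, 7.2, 7.5, 6.8, 5.3, 3.7, 2.5, 1.9]
DAYS_PER_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def yearly_energy_kwh(monthly_mean, area_m2, efficiency):
    """Yearly produced energy estimated from just twelve monthly means."""
    yearly_irradiation = sum(g * d for g, d in zip(monthly_mean, DAYS_PER_MONTH))
    return yearly_irradiation * area_m2 * efficiency

energy = yearly_energy_kwh(MONTHLY_MEAN, area_m2=10.0, efficiency=0.15)
```

Finer time resolutions only refine the within-month profile of the same twelve totals, which is why, per the abstract, they change the yearly result very little.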
Abstract:
The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we will refer to as "snapshots": a company semantic snapshot, a data snapshot and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to take early decisions, thus saving time and money, mitigating risks and improving innovation, time to market and productivity. SDM works by measuring the Euclidean distance between each initial snapshot and its reference model (the company semantic snapshots reference model, the data snapshots reference model, and the semantic application snapshots reference model). The difficulty level is "not at all difficult" when the distance is small, and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2000 simulated companies with different arrangements and several initial stages. The output is measured on five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation model indicate, transforming a search application into one that integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
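The distance-to-reference mechanism can be sketched in a few lines. Note that the numeric feature encoding of snapshots and the thresholds mapping distances to the five linguistic values are hypothetical here; the paper's calibration is not reproduced:

```python
import math

def snapshot_distance(snapshot, reference):
    # Euclidean distance between a snapshot and its reference model,
    # both encoded as equal-length numeric feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(snapshot, reference)))

def difficulty_label(distance, scale=1.0):
    # Map a distance to SDM's five linguistic values. The uniform
    # thresholds (one level per `scale` units) are illustrative only.
    levels = ["not at all difficult", "slightly difficult",
              "averagely difficult", "very difficult", "extremely difficult"]
    index = min(int(distance / scale), len(levels) - 1)
    return levels[index]
```

A company whose initial snapshot sits close to its reference model thus receives a low difficulty label, and the label saturates at "extremely difficult" for large distances.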
Abstract:
PURPOSE The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Recurrently, decision makers face the dichotomous question of following a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or following a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process will impact different players in the organization. The choice of decision-making approach is difficult to make, even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions, the purpose being to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help to solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry and the second set on the health care industry. Despite these models being case specific, they serve to exemplify that different approaches to the problems could provide interesting results. Unfortunately, there is no universal recipe that could be applied to all problems; furthermore, the same model could deliver good results with certain data and bad results with other data. A framework to analyse the data before selecting the model to be used is presented and tested on the models developed to exemplify these ideas.
METHODOLOGY As the first step of the research, a systematic literature review on joint decision-making is presented, along with the different opinions and suggestions of scholars. For the next stage of the thesis, the decision-making process of more than 50 companies from different sectors was analysed in the production planning area at the job-shop level. The data was obtained through surveys and face-to-face interviews. The following part of the research into the decision-making process was carried out in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, where the vehicle routing and inventory problems were combined. The next step was to add the car production scheduling (car sequencing) decision and use metaheuristics such as ant colony optimization and genetic algorithms to measure whether the behaviour holds for problems of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to carry out the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and for the second part the interaction of uncertainty was added. Once the previous problem had been analysed, a general framework to characterize the instance was built. In the final chapter a general conclusion is presented. FINDINGS AND PRACTICAL IMPLICATIONS The first part of the contributions is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process.
Then the results of the survey are presented, which reveal a lack of consistency between what managers believe and the reality of the integration of their decisions. In the next stage of the thesis, a contribution to the body of knowledge of operations research is made with the joint solution of the replenishment, sequencing and inventory problem in the assembly line, together with parallel work on operating room scheduling, where different solution approaches are presented. Beyond the contribution of the solution methods, using different techniques, the main contribution is the framework proposed to pre-evaluate a problem before thinking about the techniques to solve it. However, there is no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, the evaluation of factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives us important hints as to the most suitable direction to take in tackling the problem. RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH In the first part of the work it was very complicated to calculate the possible savings of different projects, since in many papers these quantities are not reported or the impact is based on non-quantifiable benefits. Another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow us to solve bigger instances. For the operating room problem there was a lack of historical data to perform a parallel analysis in the teaching hospital. In order to keep testing the decision framework it is necessary to apply it to more case studies, to generalize the results and make them more evident and less ambiguous.
The health care field offers great opportunities: despite recent awareness of the need to improve the decision-making process, there is still much room for improvement. Another big difference from the automotive industry is that the latest improvements are not spread among all the actors. Therefore, in the future this research will focus more on collaboration between academia and the health care sector.
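The core dichotomy of the thesis, sequential versus joint decision-making, can be illustrated with a deliberately small toy problem (our own construction, not one of the thesis case studies): two interacting decisions for which optimizing one after the other misses the global optimum that a joint search finds.

```python
from itertools import product

# Toy coupled cost: a stage-1 decision x and a stage-2 decision y
# interact through the 2*x*y term, so they cannot be optimized apart.
def cost(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2 + 2 * x * y

X = Y = range(-5, 6)

# Sequential: fix x using only its own term, then pick the best y given x.
x_seq = min(X, key=lambda x: (x - 2) ** 2)
y_seq = min(Y, key=lambda y: cost(x_seq, y))
sequential_cost = cost(x_seq, y_seq)

# Joint: search over both decisions simultaneously.
joint_cost = min(cost(x, y) for x, y in product(X, Y))
```

Here the sequential procedure locks in x = 2 and ends with cost 8, while the joint search finds a strictly cheaper pair. This is exactly the trade-off the framework is meant to weigh: joint optimization can be better, but its search space grows multiplicatively with the number of coupled decisions.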
Abstract:
Date of Acceptance: 5/04/2015. 15 pages, 4 figures.
Abstract:
The saliva of blood-sucking arthropods contains powerful pharmacologically active substances and may be a vaccine target against some vector-borne diseases. Subtractive cloning combined with biochemical approaches was used to discover activities in the salivary glands of the hematophagous fly Lutzomyia longipalpis. Sequences of nine full-length cDNA clones were obtained, five of which are possibly associated with blood-meal acquisition, each having cDNA similarity to: (i) the bed bug Cimex lectularius apyrase, (ii) a 5′-nucleotidase/phosphodiesterase, (iii) a hyaluronidase, (iv) a protein containing a carbohydrate-recognition domain (CRD), and (v) an RGD-containing peptide with no significant matches to known proteins in the BLAST databases. Following these findings, we observed that the salivary apyrase activity of L. longipalpis is indeed similar to that of Cimex apyrase in its metal requirements. The predicted isoelectric point of the putative apyrase matches the value found for Lutzomyia salivary apyrase. A 5′-nucleotidase, as well as hyaluronidase activity, was found in the salivary glands, and the CRD-containing cDNA matches the N-terminal sequence of the HPLC-purified salivary anticlotting protein. A cDNA similar to α-amylase was discovered and salivary enzymatic activity was demonstrated for the first time in a blood-sucking arthropod. Full-length clones were also found coding for three proteins of unknown function matching, respectively, the N-terminal sequence of an abundant salivary protein, having similarity to the CAP superfamily of proteins and the Drosophila yellow protein. Finally, two partial sequences are reported that match possible housekeeping genes. Subtractive cloning will considerably enhance efforts to unravel the salivary pharmacopeia of blood-sucking arthropods.
Abstract:
Genetic analysis of plant–pathogen interactions has demonstrated that resistance to infection is often determined by the interaction of dominant plant resistance (R) genes and dominant pathogen-encoded avirulence (Avr) genes. It was postulated that R genes encode receptors for Avr determinants. A large number of R genes and their cognate Avr genes have now been analyzed at the molecular level. R gene loci are extremely polymorphic, particularly in sequences encoding amino acids of the leucine-rich repeat motif. A major challenge is to determine how Avr perception by R proteins triggers the plant defense response. Mutational analysis has identified several genes required for the function of specific R proteins. Here we report the identification of Rcr3, a tomato gene required specifically for Cf-2-mediated resistance. We propose that Avr products interact with host proteins to promote disease, and that R proteins “guard” these host components and initiate Avr-dependent plant defense responses.
Abstract:
We study a simple antiplane fault of finite length embedded in a homogeneous isotropic elastic solid to understand the origin of seismic source heterogeneity in the presence of nonlinear rate- and state-dependent friction. All the mechanical properties of the medium and friction are assumed homogeneous. Friction includes a characteristic length that is longer than the grid size, so that our models have a well-defined continuum limit. Starting from a heterogeneous initial stress distribution, we apply a slowly increasing uniform stress load far from the fault and we simulate the seismicity for a few thousand events. The style of seismicity produced by this model is determined by a control parameter associated with the degree of rate dependence of friction. For classical friction models with rate-independent friction, no complexity appears and seismicity is perfectly periodic. For weakly rate-dependent friction, large ruptures are still periodic, but small seismicity becomes increasingly nonstationary. When friction is highly rate-dependent, seismicity becomes nonperiodic and ruptures of all sizes occur inside the fault. Highly rate-dependent friction destabilizes the healing process, producing premature healing of slip and partial stress drop. Partial stress drop produces large variations in the state of stress that in turn produce earthquakes of different sizes. Similar results have been found by other authors using the Burridge–Knopoff model. We conjecture that all models in which static stress drop is only a fraction of the dynamic stress drop produce stress heterogeneity.
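The continuum rate-and-state model itself is beyond a short sketch, but the qualitative finding, that a single control parameter tunes the system between periodic and complex seismicity, can be illustrated with a minimal Olami–Feder–Christensen-style cellular stick-slip model, written here in the spirit of the Burridge–Knopoff analogy the abstract cites (our own construction, not the authors' model). The fraction alpha of stress a failing cell passes to each neighbour plays the role of the control parameter: with alpha = 0 every event is a single-cell failure, while coupled cells produce avalanches of many sizes.

```python
import numpy as np

def simulate(alpha, n_cells=64, n_events=2000, seed=1):
    """Minimal OFC-style stick-slip cellular model.

    alpha is the fraction of a failing cell's stress passed to each
    nearest neighbour; it acts as the control parameter tuning the
    model between simple and complex sequences of event sizes.
    """
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.0, 1.0, n_cells)   # heterogeneous initial stress
    sizes = []
    for _ in range(n_events):
        top = int(s.argmax())
        s += 1.0 - s[top]                # slow uniform loading to failure
        s[top] = 1.0                     # guard against round-off
        failing = [top]
        size = 0
        while failing:                   # propagate the avalanche
            i = failing.pop()
            if s[i] < 1.0:               # already relaxed earlier in the event
                continue
            size += 1
            released = s[i]
            s[i] = 0.0                   # full local stress drop
            for j in (i - 1, i + 1):     # redistribute to nearest neighbours
                if 0 <= j < n_cells:
                    s[j] += alpha * released
                    if s[j] >= 1.0:
                        failing.append(j)
        sizes.append(size)
    return sizes
```

With no coupling, every event relaxes exactly one cell; with coupling, stress transfer lets single failures cascade into ruptures of varying sizes, the toy analogue of the heterogeneity discussed above.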
Abstract:
High-level globin expression in erythroid precursor cells depends on the integrity of NF-E2 recognition sites, transcription factor AP-1-like protein-binding motifs, located in the upstream regulatory regions of the alpha- and beta-globin loci. The NF-E2 transcription factor, which recognizes these sites, is a heterodimer consisting of (i) p45 NF-E2 (the larger subunit), a hematopoietic-restricted basic leucine zipper protein, and (ii) a widely expressed basic leucine zipper factor, p18 NF-E2, the smaller subunit. p18 NF-E2 protein shares extensive homology with the maf protooncogene family. To determine an in vivo role for p18 NF-E2 protein we disrupted the p18 NF-E2-encoding gene by homologous recombination in murine embryonic stem cells and generated p18 NF-E2-/- mice. These mice are indistinguishable from littermates throughout all phases of development and remain healthy in adulthood. Despite the absence of expressed p18 NF-E2, DNA-binding activity with the properties of the NF-E2 heterodimer is present in fetal liver erythroid cells of p18 NF-E2-/- mice. We speculate that another member of the maf basic leucine zipper family substitutes for the p18 subunit in a complex with p45 NF-E2. Thus, p18 NF-E2 per se appears to be dispensable in vivo.
Abstract:
DNA probes from the L6 rust resistance gene of flax (Linum usitatissimum) hybridize to resistance genes at the unlinked M locus, indicating sequence similarities between genes at the two loci. Genetic and molecular data indicate that the L locus is simple and contains a single gene with 13 alleles, and that the M locus is complex and contains a tandem array of genes of similar sequence. Thus, the evolution of these two related loci has been different. The consequence of the contrasting structures of the L and M loci for the evolution of different rust resistance specificities can now be investigated at the molecular level.
Abstract:
Semiotic components in the relations of complex systems depend on the Subject. There are two main semiotic components: Neutrosophic and Modal. Modal components are alethic and deontic. In this paper the authors apply the theory of Neutrosophy and Modal Logic to Deontical Impure Systems.