829 results for Two Approaches
Abstract:
Vatne [13] and Green and Marcos [9] have independently studied the Koszul-like homological properties of graded algebras that have defining relations in degree 2 and exactly one other degree. We contrast these two approaches, answer two questions posed by Green and Marcos, and find conditions that imply the corresponding Yoneda algebras are generated in the lowest possible degrees.
Abstract:
Two competing concepts of umbilical cord blood (UCB) banking are currently available: either allogeneic UCB is donated to a public bank, or autologous cells are stored in a private bank. Allogeneic-autologous hybrid banking is a new concept that combines these two approaches. However, acceptance of hybrid UCB banking among potential donors is unknown to date.
Abstract:
Khutoretsky dealt with the problem of maximising a linear utility function (MUF) over the set of short-term equilibria in a housing market by reducing it to a linear programming problem, and suggested a combinatorial algorithm for this problem. Two approaches to market adjustment were considered: the funding of housing construction and the granting of housing allowances. In both cases, locally optimal regulatory measures can be developed using the corresponding dual prices. The optimal effects (with regulation expenditures restricted to an amount K) can be found using specialised models based on the MUF: a model M1 for choosing the optimum structure of investment in housing construction, and a model M2 for the optimum distribution of housing allowances. The linear integer optimisation problems corresponding to these models are initially difficult but can be solved after slight modifications of the parameters. In particular, the necessary modification of K does not exceed the maximum construction cost of one dwelling (for M1) or the maximum size of one housing allowance (for M2). The result is particularly useful since such a slight modification of K is of little consequence in practice.
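For orientation, here is a minimal sketch of the kind of budget-constrained selection that models like M1 and M2 formalise, written as a 0/1 knapsack solved by dynamic programming; the project costs, utility values, and budget K below are hypothetical and are not taken from Khutoretsky's models.

```python
def best_allocation(costs, utilities, budget):
    """Pick a subset of projects (construction investments or allowances)
    that maximises total utility without exceeding the budget K.
    Classic 0/1 knapsack dynamic programme; costs and budget are integers."""
    n = len(costs)
    best = [0] * (budget + 1)                      # best[b] = max utility within budget b
    chosen = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):  # iterate downwards for 0/1 semantics
            cand = best[b - costs[i]] + utilities[i]
            if cand > best[b]:
                best[b] = cand
                chosen[i][b] = True
    picks, b = [], budget                          # walk back to recover the selection
    for i in range(n - 1, -1, -1):
        if chosen[i][b]:
            picks.append(i)
            b -= costs[i]
    return best[budget], picks[::-1]

# Hypothetical example: three projects, budget K = 10
print(best_allocation(costs=[4, 5, 6], utilities=[7, 8, 9], budget=10))  # (16, [0, 2])
```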
Abstract:
Studies of diagnostic accuracy require more sophisticated methods for their meta-analysis than studies of therapeutic interventions. A number of different, and apparently divergent, methods for meta-analysis of diagnostic studies have been proposed, including two alternative approaches that are statistically rigorous and allow for between-study variability: the hierarchical summary receiver operating characteristic (ROC) model (Rutter and Gatsonis, 2001) and bivariate random-effects meta-analysis (van Houwelingen and others, 1993, 2002; Reitsma and others, 2005). We show that these two models are very closely related, and define the circumstances in which they are identical. We discuss the different forms of summary model output suggested by the two approaches, including summary ROC curves, summary points, confidence regions, and prediction regions.
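For orientation, a common way to write the bivariate random-effects model (the notation is ours, not taken from the cited papers; the within-study level is sometimes written with normal approximations rather than binomial likelihoods): each study i contributes a logit-transformed sensitivity and specificity, and these pairs are modelled as draws from a bivariate normal distribution.

```latex
\begin{align*}
y_{i}^{\text{sens}} &\sim \operatorname{Binomial}\bigl(n_{i}^{\text{diseased}},\ \operatorname{logit}^{-1}(\theta_{Ai})\bigr),\\
y_{i}^{\text{spec}} &\sim \operatorname{Binomial}\bigl(n_{i}^{\text{non-diseased}},\ \operatorname{logit}^{-1}(\theta_{Bi})\bigr),\\
\begin{pmatrix}\theta_{Ai}\\ \theta_{Bi}\end{pmatrix} &\sim
\mathcal{N}\!\left(\begin{pmatrix}\mu_{A}\\ \mu_{B}\end{pmatrix},\;
\begin{pmatrix}\sigma_{A}^{2} & \rho\,\sigma_{A}\sigma_{B}\\ \rho\,\sigma_{A}\sigma_{B} & \sigma_{B}^{2}\end{pmatrix}\right).
\end{align*}
```

The summary point is then (logit^{-1}(mu_A), logit^{-1}(mu_B)), and summary ROC curves, confidence regions, and prediction regions can all be derived from the same fitted mean vector and covariance matrix.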
Abstract:
Current concepts of synaptic fine-structure are derived from electron microscopic studies of tissue fixed by chemical fixation using aldehydes. However, chemical fixation with glutaraldehyde and paraformaldehyde and subsequent dehydration in ethanol result in uncontrolled tissue shrinkage. While electron microscopy allows for the unequivocal identification of synaptic contacts, it cannot be used for real-time analysis of structural changes at synapses. For the latter purpose, advanced fluorescence microscopy techniques must be applied, which, however, do not allow for the identification of synaptic contacts. Here, two approaches are described that may overcome, at least in part, some of these drawbacks in the study of synapses. Focusing on a characteristic, easily identifiable synapse, the mossy fiber synapse in the hippocampus, we first describe high-pressure freezing of fresh tissue as a method that may be applied to study subtle changes in synaptic ultrastructure associated with functional synaptic plasticity. Next, we propose labeling presynaptic mossy fiber terminals and postsynaptic complex spines on CA3 pyramidal neurons with different fluorescent dyes to allow real-time monitoring of these synapses in living tissue over extended periods of time. We expect these approaches to lead to new insights into the structure and function of central synapses.
Abstract:
In the last decade, there has been increasing interest in cognitive alterations during the early course of schizophrenia. From a clinical perspective, a better understanding of cognitive functioning in putative at-risk states for schizophrenia is essential for developing optimal early intervention models. Two approaches have more recently been combined to assess the entire course of the initial schizophrenia prodrome: the predictive "basic symptom at-risk" (BS) and the ultra high-risk (UHR) criteria. Basic symptoms are considered to be present during the entire disease progression, including the initial prodrome, while the onset of symptoms captured by the UHR criteria expresses further disease progression toward frank psychosis. The present study investigated cognitive functioning in 93 subjects who met either BS or UHR criteria and were thus assumed to be at different points on the putative trajectory to psychosis. We compared them with 43 patients with a first episode of psychosis and with 49 help-seeking patient controls. All groups performed significantly below normative values. Both at-risk groups performed at intermediate levels between the first-episode (FE) group and normative values, and the UHR group demonstrated intermediate performance between the FE and BS groups. Overall, auditory working memory, verbal fluency/processing speed, and declarative verbal memory were the most impaired. Our results suggest that cognitive impairments may still be modest in the early stages of the initial schizophrenia prodrome and thus support current efforts to intervene in the early course of impending schizophrenia, because early intervention may prevent or delay the onset of frank psychosis and thus prevent further cognitive damage.
Abstract:
The demand for power generation from non-renewable resources, and the associated production costs, are increasing at an alarming rate. Solar energy is one of the renewable resources with the potential to curb this increase. To date, utilization of solar energy has been concentrated mainly on heating applications. Using solar energy for cooling systems in buildings would contribute greatly to the goal of minimizing non-renewable energy use. The solar heating system research carried out by institutions such as the University of Wisconsin at Madison, and the building heat flow model research conducted at Oklahoma State University, can be used to develop and optimize a solar cooling building system. This research uses these two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source to drive the cycle. The software was then put through a series of verification tests on building cooling system data sets from similar applications around the world, and its output was identical to the established experimental results from those data sets. Software developed by other research groups caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, is also accessible to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
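A minimal sketch of the kind of energy balance such a simulation rests on, assuming a flat-plate collector feeding a single-effect absorption chiller; the collector area, efficiency, irradiance, and COP values are illustrative placeholders, not figures from the dissertation.

```python
def solar_cooling_capacity(collector_area_m2, irradiance_w_m2,
                           collector_efficiency, chiller_cop):
    """Estimate the cooling delivered by a solar absorption system.

    Useful solar heat: Q_u = A * G * eta_collector
    Cooling output:    Q_c = Q_u * COP_chiller
    All quantities in watts; steady state, with losses beyond the collector
    efficiency and chiller COP ignored."""
    q_useful = collector_area_m2 * irradiance_w_m2 * collector_efficiency
    return q_useful * chiller_cop

# Illustrative numbers: 50 m2 of collectors, 800 W/m2 irradiance,
# 55% collector efficiency, COP of 0.7 for a single-effect chiller
print(solar_cooling_capacity(50, 800, 0.55, 0.7), "W of cooling")
```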
Abstract:
Infrared thermography is a well-recognized non-destructive testing technique for evaluating concrete bridge elements such as bridge decks and piers. However, overcoming some obstacles and limitations is necessary before this invaluable technique can be added to the bridge inspector's toolbox. Infrared thermography is based on collecting radiant temperature and presenting the results as a thermal infrared image. Two methods of conducting an infrared thermography test are passive and active; the source of heat is the main difference between the two. Solar energy and ambient temperature change are the main heat sources in a passive infrared thermography test, while active infrared thermography involves generating a temperature gradient using an external heat source other than the sun. Passive infrared thermography testing was conducted on three concrete bridge decks in Michigan. Ground truth information was gathered by coring several locations on each bridge deck to validate the results obtained from the passive infrared thermography test. Challenges associated with data collection and processing using passive infrared thermography are discussed and provide additional evidence that passive infrared thermography is a promising remote sensing tool for bridge inspections. To improve the capabilities of the infrared thermography technique for evaluating the underside of bridge decks and bridge girders, an active infrared thermography technique using a surface heating method was developed in the laboratory on five concrete slabs with simulated delaminations. Results from this study demonstrated that active infrared thermography not only eliminates some limitations associated with passive infrared thermography but also provides information on the depth of the delaminations. Active infrared thermography was also conducted on a segment of an out-of-service prestressed box beam, and cores were extracted from several locations on the beam to validate the results. This study confirms the feasibility of applying active infrared thermography to concrete bridges and of estimating the size and depth of delaminations. From the results gathered in this dissertation, it was established that applying both passive and active thermography can provide transportation agencies with qualitative and quantitative measures for efficient maintenance and repair decision-making.
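As a rough illustration of how active thermography can yield delamination depth, a commonly cited first-order rule from one-dimensional heat diffusion says that the time at which a defect's thermal contrast appears scales roughly with the square of its depth, t ≈ z²/α. The sketch below simply inverts that relation; the diffusivity value and observation time are hypothetical, and this is a simplification rather than the dissertation's actual depth-estimation method, which the abstract does not detail.

```python
import math

def estimate_depth_m(time_to_contrast_s, thermal_diffusivity_m2_s):
    """First-order depth estimate for active (pulsed) thermography:
    assumes 1-D heat diffusion, so t ~ z^2 / alpha  =>  z ~ sqrt(alpha * t)."""
    return math.sqrt(thermal_diffusivity_m2_s * time_to_contrast_s)

# Illustrative values: concrete diffusivity ~1e-6 m^2/s, contrast seen after 100 s
print(f"{estimate_depth_m(100, 1e-6) * 1000:.1f} mm")  # ~10 mm
```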
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is done by processing data from informative measurements. Estimation performance can be enhanced if we choose the measurements according to criteria that direct our sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within a Bayesian inference model, which is often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed by the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move with some distribution, so the prior distribution changes at each time step, and the information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We have attempted to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
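A minimal sketch of the kind of sequential Bayesian update such adaptive sensing relies on, for a linear-Gaussian observation model y = A x + n with a Gaussian prior on x; the dimensions, noise level, and the (here random) choice of measurement matrix are illustrative and not the thesis's design, which would pick each A adaptively from the current posterior.

```python
import numpy as np

def posterior_update(prior_mean, prior_cov, A, y, noise_var):
    """Conjugate Gaussian update for y = A x + n, n ~ N(0, noise_var * I).
    Returns the posterior mean and covariance of x."""
    prior_prec = np.linalg.inv(prior_cov)
    post_prec = prior_prec + A.T @ A / noise_var
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ prior_mean + A.T @ y / noise_var)
    return post_mean, post_cov

# Illustrative use: 4-dimensional parameter, two sequential measurement matrices
rng = np.random.default_rng(0)
x_true = rng.standard_normal(4)
mean, cov = np.zeros(4), np.eye(4)           # prior
for _ in range(2):
    A = rng.standard_normal((3, 4))          # placeholder; an adaptive scheme would
                                             # choose A based on the current posterior
    y = A @ x_true + 0.1 * rng.standard_normal(3)
    mean, cov = posterior_update(mean, cov, A, y, noise_var=0.01)
print(np.round(mean - x_true, 2))            # estimation error after two updates
```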
Abstract:
Fuzzy community detection identifies fuzzy communities in a network, i.e., groups of vertices such that the membership of a vertex in a community lies in [0,1] and the memberships of each vertex across all communities sum to 1. Fuzzy communities are pervasive in social networks, but only a few works have addressed fuzzy community detection. Recently, a one-step extension of Newman's modularity, the most popular quality function for disjoint community detection, led to the Generalized Modularity (GM), which demonstrates good performance in finding well-known fuzzy communities. Thus, GM is chosen as the quality function in our research. We first propose a generalized fuzzy t-norm modularity to investigate the effect of different fuzzy intersection operators on fuzzy community detection, since GM makes the introduction of a fuzzy intersection operation feasible. The experimental results show that the Yager operator with a proper parameter value performs better than the product operator in revealing community structure. We then focus on finding optimal fuzzy communities in a network by directly maximizing GM, which we call the Fuzzy Modularity Maximization (FMM) problem. The work on the FMM problem yields the major contribution of this thesis: an efficient and effective GM-based fuzzy community detection method that automatically discovers a fuzzy partition of a network when that is appropriate, much better than the fuzzy partitions found by existing fuzzy community detection methods, and a crisp partition when that is appropriate, competitive with the partitions produced by the best disjoint community detection methods to date. We address the FMM problem by iteratively solving a sub-problem called One-Step Modularity Maximization (OSMM). We present two approaches for this iterative procedure: a tree-based global optimizer called Find Best Leaf Node (FBLN) and a heuristic-based local optimizer. OSMM reduces to a simplified quadratic knapsack problem that can be solved in linear time; thus, a solution of OSMM can be found in linear time. Since the OSMM algorithm is called recursively within FBLN and the structure of the search tree is non-deterministic, the FMM/FBLN algorithm runs in a time complexity of at least O(n²). We therefore also propose several highly efficient and very effective heuristic algorithms, namely the FMM/H algorithms. We compared our proposed FMM/H algorithms with two state-of-the-art community detection methods, modified MULTICUT Spectral Fuzzy c-Means (MSFCM) and a Genetic Algorithm with a Local Search strategy (GALS), on 10 real-world data sets. The experimental results suggest that the H2 variant of FMM/H is the best-performing version: it is very competitive with GALS in producing maximum-modularity partitions, performs much better than MSFCM, and on all 10 data sets is 2-3 orders of magnitude faster than GALS. Furthermore, by adopting a slightly modified version of the H2 algorithm as a mutation operator, we designed a genetic algorithm for fuzzy community detection, namely GAFCD, in which elite selection and early termination are applied. The crossover operator is designed to make GAFCD converge quickly and to enhance its ability to escape local minima. Experimental results on all the data sets show that GAFCD uncovers better community structure than GALS.
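As an illustration of the quantity being maximized, here is a minimal sketch of a fuzzy modularity with the product used as the fuzzy intersection (the "product operator" mentioned above); whether this coincides exactly with the GM definition used in the thesis is an assumption, and the small graph and membership matrix are invented for illustration.

```python
import numpy as np

def fuzzy_modularity(adjacency, membership):
    """Modularity of a fuzzy partition with the product as fuzzy intersection:
    Q = (1/2m) * sum_{i,j,c} [A_ij - k_i*k_j/(2m)] * u_ic * u_jc,
    where membership[i, c] is vertex i's membership in community c (rows sum to 1)."""
    A = np.asarray(adjacency, dtype=float)
    U = np.asarray(membership, dtype=float)
    k = A.sum(axis=1)
    two_m = k.sum()
    B = A - np.outer(k, k) / two_m          # modularity matrix
    return np.trace(U.T @ B @ U) / two_m

# Tiny illustrative graph: two triangles joined by one edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
U = np.array([[1, 0], [1, 0], [0.8, 0.2], [0.2, 0.8], [0, 1], [0, 1]])
print(round(fuzzy_modularity(A, U), 3))
```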
Abstract:
Analyzing “nuggety” gold samples commonly produces erratic fire assay results, due to the random inclusion or exclusion of coarse gold in analytical samples. Preconcentrating gold samples might allow the nuggets to be concentrated and fire assayed separately. In this investigation, synthetic gold samples were made using silica and tungsten powder (of density similar to gold), and were preconcentrated using two approaches: an air jig and an air classifier. The current analytical gold sampling method is time- and labor-intensive, and our aim was to design a set-up for rapid testing. The preliminary air classifier design showed more promise than the air jig in terms of control over mineral recovery and preconcentrating bulk ore sub-samples. Hence the air classifier was modified with the goal of producing 10-30 gram samples that capture all of the high-density metallic particles, tungsten in this case. The effects of air velocity and feed rate on the recovery of tungsten from synthetic tungsten-silica mixtures were studied. The air classifier achieved an optimal high-density metal recovery of 97.7% at an air velocity of 0.72 m/s and a feed rate of 160 g/min. The effect of density on classification was investigated by using iron as the dense metal instead of tungsten; the recovery dropped from 96.13% to 20.82%. These preliminary investigations suggest that preconcentration of gold samples is feasible using the laboratory-designed air classifier.
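A rough illustration of why particle density drives the recovery reported above: air classification separates largely by terminal settling velocity, which for small particles can be approximated by Stokes' law. The particle size and the use of Stokes' law itself are illustrative assumptions, not parameters from this investigation.

```python
def stokes_terminal_velocity(diameter_m, particle_density, fluid_density=1.2,
                             fluid_viscosity=1.8e-5, g=9.81):
    """Terminal settling velocity in air from Stokes' law:
    v = (rho_p - rho_f) * g * d^2 / (18 * mu). Valid only at low Reynolds numbers."""
    return (particle_density - fluid_density) * g * diameter_m**2 / (18 * fluid_viscosity)

# 30-micron particles of tungsten (~19300 kg/m3), iron (~7870) and silica (~2650)
for name, rho in [("tungsten", 19300), ("iron", 7870), ("silica", 2650)]:
    print(name, round(stokes_terminal_velocity(30e-6, rho), 3), "m/s")
```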
Abstract:
Practice is subject to increasing pressure to demonstrate its ability to achieve the outcomes required by public policy makers. As part of this process, social work practice has to engage with issues around advancing knowledge-based learning processes in close collaboration with education- and research-based perspectives. This has given rise to approaches seeking to combine research methodology, field research and practical experience. Practice research is connected to both “the science of the concrete” – a field of research oriented towards subjects more than objects – and “mode 2 knowledge production” – an application-oriented form of research in which frameworks and findings are discussed by a number of partners. Practice research is divided into two approaches: practice research – collaboration between practice and research – and practitioner research – processes controlled and accomplished by practitioners. The basic stakeholders in practice research are social workers, service users, administrators, management, organisations, politicians and researchers. Accordingly, practice research is necessarily collaborative, a meeting point for different views, interests and needs, where complexity and dilemmas are inherent. Instead of attempting to balance or reconcile these differences, it is important to respect them if collaboration is to be established. The strength of both practice and research in practice research lies in addressing these difficult challenges; the danger for both fields is to avoid and reject them.
Abstract:
Geometrical dependencies are investigated for the analytical representation of the probability density function (pdf) of the travel time between a random point and a known or another random point in Tchebyshev's metric. In the most popular case, a rectangular service area, the pdf of this random variable depends directly on the position of the server. Two approaches are introduced for the exact analytical calculation of the pdf: an ad-hoc approach, useful for 'manual' solution of a specific case, and superposition, an algorithmic approach for the general case. The main concept of each approach is explained, and a short comparison is made to confirm their correctness.
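As a quick numerical cross-check of such analytical pdfs, the travel-time distribution can be approximated by simulation; the sketch below assumes unit speed, a rectangular service area, and a fixed server position, all of which are illustrative choices rather than the paper's setup.

```python
import numpy as np

def chebyshev_travel_time_samples(server, width, height, n=100_000, seed=0):
    """Sample travel times (at unit speed) from a fixed server to points uniformly
    distributed in a width x height rectangle, using the Chebyshev (Tchebyshev)
    metric: d = max(|dx|, |dy|)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform([0, 0], [width, height], size=(n, 2))
    return np.max(np.abs(pts - np.asarray(server)), axis=1)

# Empirical pdf (histogram) for a server at the centre of a 2 x 1 rectangle
samples = chebyshev_travel_time_samples(server=(1.0, 0.5), width=2.0, height=1.0)
density, edges = np.histogram(samples, bins=50, density=True)
print(density[:5])
```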
Abstract:
Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimating mean radon exposure in the Swiss population: model-based predictions at the individual level and measurement-based predictions based on measurements aggregated at the municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted by the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows radon levels to be predicted at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
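A minimal sketch of the measurement-based aggregation described above (municipality means weighted by population); the numbers are made up, and the floor-distribution correction is reduced to a single multiplicative factor for illustration.

```python
def population_weighted_mean(municipalities):
    """Each entry: (mean measured radon in Bq/m3, floor-correction factor, population).
    Returns the population-weighted mean of the corrected municipal radon levels."""
    weighted_sum = sum(mean * correction * pop for mean, correction, pop in municipalities)
    total_pop = sum(pop for _, _, pop in municipalities)
    return weighted_sum / total_pop

# Hypothetical municipalities: (mean Bq/m3, floor correction, population)
data = [(95.0, 0.9, 12_000), (70.0, 1.0, 48_000), (120.0, 0.85, 5_500)]
print(round(population_weighted_mean(data), 1), "Bq/m3")
```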
Abstract:
Localized short-echo-time ¹H-MR spectra of human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. Most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts; in individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small but significantly different tissue content estimates for most metabolites, and it provides a means to quantitate baseline contributions that may contain crucial clinical information.
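A minimal sketch of the core of linear-combination modeling as described above: the measured spectrum is fitted as a non-negative weighted sum of basis metabolite spectra, with a macromolecular baseline spectrum optionally treated as one more basis column. The basis data here are synthetic placeholders; real analyses add lineshape, phase, and frequency-shift parameters that this sketch omits.

```python
import numpy as np
from scipy.optimize import nnls

def fit_linear_combination(basis, spectrum):
    """Fit `spectrum` (length N) as a non-negative combination of the columns of
    `basis` (N x M, one column per metabolite or macromolecular component).
    Returns the concentration-like weights and the residual norm."""
    weights, residual = nnls(basis, spectrum)
    return weights, residual

# Synthetic example: three Gaussian 'metabolite' peaks plus noise
x = np.linspace(0, 1, 512)
basis = np.column_stack([np.exp(-((x - c) / 0.02) ** 2) for c in (0.3, 0.5, 0.7)])
true_weights = np.array([2.0, 0.5, 1.2])
spectrum = basis @ true_weights + 0.01 * np.random.default_rng(1).standard_normal(512)
print(np.round(fit_linear_combination(basis, spectrum)[0], 2))  # close to true_weights
```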