36 results for Biodiversity Monitoring Transect Analysis in Africa


Relevance:

100.00%

Publisher:

Abstract:

It is well known that many neurological diseases leave a fingerprint in voice and speech production. The dramatic impact of these pathologies on quality of life is a growing concern. Many techniques have been designed for the detection, diagnosis, and monitoring of neurological disease, but most are costly or difficult to extend to primary care. The present paper shows that some neurological diseases can be traced at the level of voice production. The detection procedure would be based on a simple voice test, and the availability of advanced tools and methodologies to monitor the organic pathology of voice would facilitate the implementation of such tests. The paper hypothesizes some of the underlying mechanisms affecting voice production and gives a general description of the methodological foundations of a voice analysis system that can estimate correlates of neurological disease. A case study of spasmodic dysphonia is presented to illustrate the potential of the methodology to monitor other neurological problems as well.
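To make the notion of a "simple voice test" concrete, here is a minimal sketch (ours, not the authors' actual pipeline) of one classic voice-quality correlate, local jitter, computed from consecutive glottal cycle periods; the period values below are hypothetical.

```python
# Minimal sketch (not the authors' actual method): "local jitter", a
# classic voice-quality correlate, from consecutive glottal cycle
# periods. Period values are hypothetical, in milliseconds.

def local_jitter(periods):
    """Mean absolute difference of consecutive periods, divided by
    the mean period. Returns a fraction (x100 for percent)."""
    if len(periods) < 2:
        raise ValueError("need at least two cycle periods")
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    mean_abs_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods) / len(periods)
    return mean_abs_diff / mean_period

# Steady phonation shows low jitter; tremor-like instability raises it.
steady = [5.0, 5.01, 4.99, 5.0, 5.02]
unstable = [5.0, 5.4, 4.7, 5.3, 4.8]
```

Elevated jitter is only one of many possible correlates; a real system would combine several acoustic and biomechanical estimates.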


The acquisition of technical, contextual and behavioral competences is a prerequisite for the sustainable development and strengthening of rural communities. Territorial mapping of the status of these competences helps to design the necessary learning, so its inclusion in planning processes is useful for decision making. The article discusses the application of visual representation of competences in a rural development project with Aymara women's communities in Peru. The results show an improvement in transparency and dialogue, resulting in more successful project management and a strengthening of social organization.


An important step in assessing water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961–1990). The methodology compares different interpolation methods and alternatives to generate annual time series that minimise the bias with respect to observed values. The objective is to identify the best alternative for obtaining bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared, with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, a comparison with the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast for the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method, and that the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
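As a small illustration of the aridity-index alternatives, the classical Schreiber (1904) relation estimates mean annual runoff as R = P exp(-PET/P), with PET/P the aridity index. The sketch below applies it to two hypothetical basins (the input values are ours, not from the study).

```python
import math

def schreiber_runoff(precip_mm, pet_mm):
    """Mean annual runoff from Schreiber's (1904) relation,
    R = P * exp(-PET/P), where PET/P is the aridity index."""
    if precip_mm <= 0:
        return 0.0
    aridity = pet_mm / precip_mm
    return precip_mm * math.exp(-aridity)

# Hypothetical basins: humid (low aridity) vs. semi-arid (high aridity).
humid = schreiber_runoff(precip_mm=1200.0, pet_mm=700.0)
semiarid = schreiber_runoff(precip_mm=400.0, pet_mm=1000.0)
```

The exponential penalizes arid basins strongly, which is why such functional forms can outperform direct RCM runoff where the simulated water balance is biased.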


Over roughly the last ten years, research on construction techniques and materials in ancient Rome has advanced considerably. This work has focused on obtaining data on chemical composition, as well as on the behaviour of materials under weathering and post-depositional displacement. Many of these data should be interpreted as evidence of deterioration and damage in concrete produced in a given landscape with particular climatic characteristics. Concrete mixtures such as lime and gypsum mortars should be analysed through laboratory test programmes, and not only through descriptions based on the reference works of Strabo, Pliny the Elder or Vitruvius. Roman manufacture was determined by weather conditions, landscape, natural resources and, of course, the economic situation of the owner. In any case, every aspect of the construction must be researched. On the one hand, chemical techniques such as X-ray diffraction and optical microscopy reveal the granular composition of the mixture. On the other hand, physical and mechanical techniques such as compressive strength, capillary absorption on contact or water behaviour reveal how binder and aggregates react to weathering. However, we must be able to interpret these results. In recent years, many analyses carried out at archaeological sites in Spain have contributed different points of view and provided new data towards a method for continuing the investigation of Roman mortars. If chemical and physical analyses of Roman mortars are carried out together, and the construction and the resources used are properly interpreted, it becomes possible to understand the construction process, its date, and also the approach to future restoration.


Memory analysis techniques have become sophisticated enough to model, with a high degree of accuracy, the manipulation of simple memory structures (finite structures, singly/doubly linked lists and trees). However, modern programming languages provide extensive library support including a wide range of generic collection objects that make use of complex internal data structures. While these data structures ensure that the collections are efficient, often these representations cannot be effectively modeled by existing methods (either due to excessive analysis runtime or due to the inability to represent the required information). This paper presents a method to represent collections using an abstraction of their semantics. The construction of the abstract semantics for the collection objects is done in a manner that allows individual elements in the collections to be identified. Our construction also supports iterators over the collections and is able to model the position of the iterators with respect to the elements in the collection. By ordering the contents of the collection based on the iterator position, the model can represent a notion of progress when iteratively manipulating the contents of a collection. These features allow strong updates to the individual elements in the collection as well as strong updates over the collections themselves.
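One way to picture the iterator-position idea (a toy rendering of ours, not the paper's formal abstract domain): partition the collection into processed, current, and pending segments, which gives a notion of progress and makes a strong update of the element under the iterator possible.

```python
# Toy illustration (not the paper's domain): a collection abstraction
# that tracks iterator position by partitioning elements into
# "processed", "current", and "pending" segments.

class AbstractListWithIterator:
    def __init__(self, elements):
        self.processed = []
        self.current = None
        self.pending = list(elements)

    def advance(self):
        """Move the iterator one step; return False when exhausted."""
        if self.current is not None:
            self.processed.append(self.current)
            self.current = None
        if not self.pending:
            return False
        self.current = self.pending.pop(0)
        return True

    def strong_update(self, value):
        """Strong update of the element at the iterator position."""
        assert self.current is not None, "iterator not positioned"
        self.current = value

a = AbstractListWithIterator(["x", "y"])
a.advance()            # iterator now on "x"
a.strong_update("x'")  # strongly update the current element
a.advance()            # "x'" becomes processed, iterator on "y"
done = a.advance()     # collection exhausted
```

Because processed and pending elements never mix, the analysis can reason separately about elements already visited and those still to come.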


This paper presents a study of the effectiveness of global analysis in the parallelization of logic programs using strict independence. A number of well-known approximation domains are selected and their usefulness for the application at hand is explained. Also, methods for using the information provided by such domains to improve parallelization are proposed. Local and global analyses are built using these domains, and such analyses are embedded in a complete parallelizing compiler. Then, the performance of the domains (and the system in general) is assessed for this application through a number of experiments. We argue that the results offer significant insight into the characteristics of these domains, the demands of the application, and the tradeoffs involved.


While logic programming languages offer a great deal of scope for parallelism, there is usually some overhead associated with the execution of goals in parallel because of the work involved in task creation and scheduling. In practice, therefore, the "granularity" of a goal, i.e. an estimate of the work available under it, should be taken into account when deciding whether or not to execute a goal concurrently as a separate task. This paper describes a method for estimating the granularity of a goal at compile time. The runtime overhead associated with our approach is usually quite small, and the performance improvements resulting from the incorporation of grainsize control can be quite good. This is shown by means of experimental results.
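The scheme can be sketched in miniature (the cost model and threshold below are our assumptions, not the paper's): the compiler derives a cost function for a goal in terms of input size, and at run time the estimated work is compared against a grainsize threshold before spawning a task.

```python
# Illustrative sketch (cost model and threshold are hypothetical):
# spawn a goal as a separate task only when its estimated work
# amortizes the task-creation and scheduling overhead.

GRAINSIZE_THRESHOLD = 32  # hypothetical: cost units that pay for a spawn

def estimated_cost(n):
    """Cost function a granularity analysis might derive for a
    linear-recursive goal over a list of length n."""
    return 2 * n + 1

def schedule(n):
    """Decide between parallel spawn and sequential execution."""
    if estimated_cost(n) > GRAINSIZE_THRESHOLD:
        return "parallel"
    return "sequential"
```

The runtime overhead is then just one comparison per candidate goal, which is why grainsize control tends to be cheap.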


Set-sharing analysis, the classic domain of Jacobs and Langen, has been widely used to infer several interesting properties of programs at compile time, such as occurs-check reduction, automatic parallelization, finite-tree analysis, etc. However, performing abstract unification over this domain implies the use of a closure operation which makes the number of sharing groups grow exponentially. Much attention has been given in the literature to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present two novel alternative representations for the traditional set-sharing domain, tSH and tNSH, which efficiently compress the sharing groups into fewer elements, enabling more efficient abstract operations, including abstract unification, without any loss of accuracy. Our experimental evaluation shows that both representations can dramatically reduce the number of sharing groups, making them more practical solutions towards scalable set-sharing.
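The closure step that causes the blow-up can be rendered directly (our sketch; in practice it is applied only to the sharing groups relevant to a unification): close a set of sharing groups under pairwise union, the so-called star union.

```python
# Our sketch of the "star union" closure that makes classic
# set-sharing expensive: closing n disjoint groups under union yields
# 2^n - 1 groups. Compressed representations such as tSH/tNSH avoid
# materializing this set.

def star_union(groups):
    """Close a set of sharing groups (frozensets of variables)
    under pairwise union, iterating to a fixpoint."""
    closure = set(groups)
    changed = True
    while changed:
        changed = False
        for g1 in list(closure):
            for g2 in list(closure):
                u = g1 | g2
                if u not in closure:
                    closure.add(u)
                    changed = True
    return closure

groups = {frozenset("x"), frozenset("y"), frozenset("z")}
closed = star_union(groups)  # all unions of non-empty subsets
```

Three singleton groups already yield seven groups after closure; with tens of variables the explicit representation becomes intractable, which motivates the compressed encodings.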


Conductance interaction identification by means of Boltzmann distribution and mutual information analysis in conductance-based neuron models.


Idea Management Systems are an implementation of the open innovation notion in the Web environment, with the use of crowdsourcing techniques. In this area, one of the popular methods for coping with large amounts of data is duplicate detection. With our research, we answer the question of whether there is room to introduce more relationship types, and to what degree this change would affect the amount of idea metadata and its diversity. Furthermore, based on hierarchical dependencies between idea relationships and relationship transitivity, we propose a number of methods for dataset summarization. To evaluate our hypotheses we annotate idea datasets with new relationships, using the contemporary methods of Idea Management Systems to detect idea similarity. Having datasets with relationship annotations at our disposal, we determine whether idea features not related to idea topic (e.g. innovation size) have any relation to how annotators perceive types of idea similarity or dissimilarity.
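As a hedged illustration of summarization via relationship transitivity (our sketch, not the paper's exact method): if "duplicate" links are treated as transitive, ideas collapse into equivalence classes, and one representative per class survives the summary. A union-find structure makes this cheap.

```python
# Our illustration of transitivity-based summarization: cluster ideas
# connected by "duplicate" links into equivalence classes (union-find)
# and keep one representative idea per class.

def summarize_duplicates(ideas, duplicate_pairs):
    parent = {i: i for i in ideas}

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in duplicate_pairs:      # union each annotated pair
        parent[find(a)] = find(b)

    clusters = {}
    for i in ideas:                   # first idea seen represents class
        clusters.setdefault(find(i), []).append(i)
    return [members[0] for members in clusters.values()]

ideas = ["idea1", "idea2", "idea3", "idea4"]
summary = summarize_duplicates(ideas, [("idea1", "idea2"),
                                       ("idea2", "idea3")])
```

Richer relationship types (e.g. "derives from") would call for directed rather than symmetric clustering, but the transitive-collapse idea is the same.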


Low resources in many African locations prevent many African scientists and physicians from accessing the latest advances in technology. This deficiency hinders the daily work of African professionals, who often cannot afford, for instance, the cost of internet fees or software licenses. The AFRICA BUILD project, funded by the European Commission and formed by four European and four African institutions, intends to provide advanced computational tools to African institutions in order to overcome current technological limitations. In the context of AFRICA BUILD we have carried out a series of experiments to test the feasibility of using cloud computing technologies in two different locations in Africa: Egypt and Burundi. The project aims to create a virtual platform providing access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa.


Access to information and continuous education are critical factors for physicians and researchers around the world. For African professionals, this situation is even more problematic due to frequently difficult access to technological infrastructures and basic information. Both education and information technologies (including, e.g., hardware, software and networking) are expensive and unaffordable for many African professionals. Thus, the use of e-learning and an open approach to information exchange and software use have already been proposed to improve medical informatics in Africa. In this context, the AFRICA BUILD project, supported by the European Commission, aims to develop a virtual platform that provides access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa. A consortium of four African and four European partners works together in this initiative. In this framework, we have developed a prototype cloud-computing infrastructure to demonstrate, as a proof of concept, the feasibility of this approach. We conducted the experiment in two different locations in Africa: Burundi and Egypt. As shown in this paper, technologies such as cloud computing and the use of open-source medical software for a large range of cases present significant challenges and opportunities for developing countries, such as many in Africa.


Outline: EU biofuels support; biofuels modelling with CAPRI; scenario setting; main results; concluding remarks. Key conclusions: biofuels production and use will remain mainly driven by public support; biofuels have strong links to agricultural markets; development of second-generation technologies would ease food-fuel links.


There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One method of approaching the modeling of complex processes is to obtain a global model. It should be able to capture the basic or general behavior of the system, by means of a linear or quadratic regression, and then superimpose a local model on it that can capture the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied for tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using a least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
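The two-step scheme can be sketched in miniature (a one-dimensional toy of ours with hypothetical wear data; the paper uses a fuzzy k-nearest-neighbors smoother rather than the plain residual average shown here): a global least-squares line captures the general trend, and a local average of the residuals of the k nearest training points is superimposed to capture localized nonlinearity.

```python
# Toy 1-D hybrid model (our sketch, not the paper's estimator):
# global least-squares line + local k-NN residual correction.

def fit_global(xs, ys):
    """Ordinary least-squares line y = a*x + b (closed form, 1-D)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def hybrid_predict(xs, ys, x_new, k=3):
    a, b = fit_global(xs, ys)
    residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
    # local correction: mean residual of the k nearest training points
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_new))[:k]
    local = sum(residuals[i] for i in order) / k
    return a * x_new + b + local

# Hypothetical wear-vs-time data with a localized bump around x = 5.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [1.1, 2.0, 3.1, 4.0, 6.5, 6.1, 7.0]
```

Near the bump the local correction lifts the prediction above the global line, which is the complementary behavior the hybrid approach exploits.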


Hydromorphic Podzol soils in the Amazon Basin generally support low-stature forests with some of the lowest amounts of aboveground net primary production (NPP) in the region. However, they can also exhibit large values of belowground NPP that can contribute significantly to the total annual inputs of organic matter into the soil. These hydromorphic Podzol soils also exhibit a horizon rich in organic matter at around 1–2 m depth, presumably as a result of eluviation of dissolved organic matter and sesquioxides of Fe and Al. Therefore, it is likely that these ecosystems store large quantities of carbon through (1) large C inputs to soils, dominated by high levels of fine-root production, and (2) stabilization of organic matter in an illuviation horizon due to significant vertical transfers of C. To assess these ideas we studied soil carbon dynamics using radiocarbon in two adjacent Amazon forests growing on contrasting soils: a hydromorphic Podzol and a well-drained Alisol supporting a high-stature terra firme forest. Our measurements showed similar concentrations of C and radiocarbon in the litter layer and the first 5 cm of the mineral soil at both sites. This result is consistent with the idea that the hydromorphic Podzol soil has soil C storage and cycling rates similar to those of the well-drained Alisol, which supports a more luxuriant vegetation. However, we found important differences in carbon dynamics and transfers along the vertical profile. In both soils we found similar radiocarbon concentrations in the subsoil, but the carbon released after incubating soil samples showed radiocarbon concentrations of recent origin in the Alisol and not in the Podzol. There were no indications of incorporation of C fixed after 1950 in the illuvial horizon of the Podzol.
With the aid of a simulation model, we predicted that only a minor fraction (1.7%) of the labile carbon decomposed in the topsoil is transferred to the subsoil of the Podzol, while this proportional transfer is about 30% in the Alisol. Furthermore, our estimates were 8 times lower than previous estimations of vertical C transfers in Amazon Podzols, which questions the validity of those previous estimations for all Podzols within the Amazon Basin. Our results also challenge previous ideas about the genesis of these particular soils and suggest that either they are not true Podzols or the podzolization processes have already stopped.
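The contrast between the two transfer fractions can be made tangible with a schematic two-pool flux calculation (our illustration; the stock and decomposition rate below are hypothetical, only the transfer fractions of 1.7% and 30% come from the text).

```python
# Schematic two-pool sketch (hypothetical STOCK and K; only the
# transfer fractions come from the study): carbon decomposed in the
# topsoil is partly respired and partly transferred to the subsoil.

def subsoil_input(topsoil_stock, decomp_rate, transfer_fraction):
    """Annual C flux into the subsoil (units of topsoil_stock per yr)."""
    decomposed = decomp_rate * topsoil_stock
    return transfer_fraction * decomposed

STOCK = 1000.0   # hypothetical topsoil labile C stock (g C m^-2)
K = 0.2          # hypothetical decomposition rate (yr^-1)

podzol_flux = subsoil_input(STOCK, K, 0.017)  # Podzol: f = 1.7%
alisol_flux = subsoil_input(STOCK, K, 0.30)   # Alisol: f = 30%
```

For equal topsoil turnover, the Alisol would route roughly 18 times more decomposed C downward than the Podzol, which is why the Podzol's illuvial horizon shows no post-1950 carbon.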