903 results for "ease of access"


Relevance: 90.00%

Publisher:

Abstract:

The Health Department of São Paulo, Brazil, has developed a Health Necessities Index (HNI) to identify priority areas for the provision of health assistance. In 2008, a survey of oral health status was conducted. The objective of this ecological study was to analyze oral health status in relation to the HNI. The variables, stratified by the ages of 5, 12 and 15 years, were: the percentage of individuals with difficulty of access to dental care services; the DMFT and DMFS indices; and the prevalence of need for tooth extraction and for treatment of dental caries. Data were analyzed for the 25 Health Technical Supervision Units (HTS). Statistical analysis used covariance testing, the Pearson correlation coefficient, and a linear regression model. A positive correlation was observed between high HNI scores and difficulty of access to services. In the HTS units with high HNI scores, a higher incidence of dental caries, a greater need for tooth extractions, and a lower proportion of caries-free individuals were observed. In order to improve the health conditions of the population, it is essential to prioritize actions in areas of social deprivation.
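The area-level analysis described here amounts to correlating an index with access and outcome variables and fitting a simple regression. A minimal sketch of that step, with hypothetical HNI scores and access-difficulty percentages (the variable names and values are placeholders, not the study's data):

```python
# Minimal sketch (not the authors' code): correlate an area-level needs index
# with difficulty of access, then fit a simple linear regression.
# All values below are hypothetical placeholders.
import numpy as np
from scipy import stats

hni = np.array([0.21, 0.35, 0.48, 0.52, 0.61, 0.74, 0.80])         # HNI score per HTS unit
pct_difficulty = np.array([4.2, 6.1, 7.8, 9.0, 10.5, 12.3, 13.9])  # % reporting access difficulty

r, p_value = stats.pearsonr(hni, pct_difficulty)
slope, intercept, r_lin, p_lin, stderr = stats.linregress(hni, pct_difficulty)

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"Linear model: difficulty = {intercept:.2f} + {slope:.2f} * HNI")
```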

Relevance: 90.00%

Publisher:

Abstract:

Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph, in the sense that interpretation of the displayed relationships is heavily dependent on context. Context, in turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easier to locate groups of similar nodes. The projection based on connectivity yields an initial placement that forgoes force-based or graph-analysis algorithms, reaching a meaningful layout in a single pass. When a force algorithm is then applied to this initial mapping, the final layout presents better properties than conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. User evaluation demonstrates that these tools promote ease of manipulation as well as fast identification of concepts and associations which cannot be easily expressed by conventional graph visualization alone. In order to allow better space usage for complex networks, a graph mapping onto the surface of a sphere is also implemented.
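A minimal sketch of the attribute-mapping idea, using scikit-learn's metric MDS as a stand-in for the projection technique and NetworkX's spring layout as the force-based refinement step; the graph and attribute values are placeholders, not the paper's data or implementation:

```python
# Minimal sketch: project node attributes to the plane as an initial layout,
# then refine with a force-directed algorithm. Attributes are hypothetical;
# metric MDS stands in for the projection techniques discussed in the paper.
import networkx as nx
import numpy as np
from sklearn.manifold import MDS

G = nx.karate_club_graph()                               # stand-in social network
rng = np.random.default_rng(0)
attributes = rng.normal(size=(G.number_of_nodes(), 5))   # hypothetical node attributes

# Project attribute vectors to 2D: nearby points have similar attributes.
xy = MDS(n_components=2, random_state=0).fit_transform(attributes)
initial_pos = {node: xy[i] for i, node in enumerate(G.nodes())}

# A few force-directed iterations starting from the projected placement.
final_pos = nx.spring_layout(G, pos=initial_pos, iterations=20, seed=0)
```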

Relevance: 90.00%

Publisher:

Abstract:

The wide use of e-technologies represents a great opportunity for underserved segments of the population, especially with the aim of reintegrating excluded individuals back into society through education. This is particularly true for people with different types of disabilities, who may have difficulties attending traditional on-site learning programs that are typically based on printed learning resources. The creation and provision of accessible e-learning contents may therefore become a key factor in enabling people with different access needs to enjoy quality learning experiences and services. Another e-learning challenge is represented by m-learning (mobile learning), which is emerging as a consequence of the diffusion of mobile terminals and provides the opportunity to browse didactical materials everywhere, outside the places traditionally devoted to education. Both situations share the need to access materials under constrained conditions, and collide with the growing use of rich media in didactical contents, which are designed to be enjoyed without any restriction.
Nowadays, Web-based teaching makes great use of multimedia technologies, ranging from Flash animations to prerecorded video-lectures. Rich media in e-learning offer significant potential for enhancing the learning environment, by helping to increase access to education, enhance the learning experience, and support multiple learning styles. Moreover, they can often be used to improve the structure of Web-based courses. These highly variegated and structured contents may significantly improve the quality and effectiveness of educational activities for learners. For example, rich media contents allow us to describe complex concepts and process flows. Audio and video elements may be utilized to add a “human touch” to distance-learning courses. Finally, real lectures may be recorded and distributed to integrate or enrich online materials. A confirmation of the advantages of these approaches can be seen in the exponential growth of video-lecture availability on the net, due to the ease of recording and delivering activities which take place in a traditional classroom.
Furthermore, the wide use of assistive technologies for learners with disabilities injects new life into e-learning systems. E-learning allows distance and flexible educational activities, thus helping disabled learners to access resources which would otherwise present significant barriers for them. For instance, students with visual impairments have difficulties in reading traditional visual materials, deaf learners have trouble following traditional (spoken) lectures, and people with motor disabilities have problems attending on-site programs. As already mentioned, the use of wireless technologies and pervasive computing may greatly enhance the educational experience by offering mobile e-learning services that can be accessed by handheld devices. This new paradigm of educational content distribution maximizes the benefits for learners, since it enables users to overcome constraints imposed by the surrounding environment. While certainly helpful for users without disabilities, we believe that the use of new mobile technologies may also become a fundamental tool for impaired learners, since it frees them from sitting in front of a PC. In this way, educational activities can be enjoyed by all users without hindrance, thus increasing the social inclusion of non-typical learners.
While the provision of fully accessible and portable video-lectures may be extremely useful for students, it is widely recognized that structuring and managing rich media contents for mobile learning services are complex and expensive tasks. Indeed, major difficulties originate from the basic need to provide a textual equivalent for each media resource composing a rich media Learning Object (LO). Moreover, tests need to be carried out to establish whether a given LO is fully accessible to all kinds of learners. Unfortunately, both these tasks are truly time-consuming processes, depending on the type of contents the teacher is writing and on the authoring tool he/she is using. Due to these difficulties, online LOs are often distributed as partially accessible or totally inaccessible content. Bearing this in mind, this thesis aims to discuss the key issues of a system we have developed to deliver accessible, customized or nomadic learning experiences to learners with different access needs and skills. To reduce the risk of excluding users with particular access capabilities, our system exploits Learning Objects (LOs) which are dynamically adapted and transcoded based on the specific needs of non-typical users and on the barriers that they can encounter in the environment. The basic idea is to dynamically adapt contents, by selecting them from a set of media resources packaged in SCORM-compliant LOs and stored in a self-adapting format. The system schedules and orchestrates a set of transcoding processes based on specific learner needs, so as to produce a customized LO that can be fully enjoyed by any (impaired or mobile) student.
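The adaptation step described here (match a learner profile against the media variants packaged in a learning object, and schedule transcoding for anything missing) can be sketched roughly as follows; the data structures, format names, and the plan_adaptation helper are hypothetical illustrations, not the thesis implementation:

```python
# Minimal sketch (hypothetical structures, not the thesis system): pick, for each
# media resource in a SCORM-like learning object, the variants a learner needs,
# and queue transcoding jobs for variants that are not already packaged.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    available: set          # variants already packaged, e.g. {"video", "slides"}

@dataclass
class LearnerProfile:
    needs: set              # variants this learner requires, e.g. {"captions", "slides"}

def plan_adaptation(resources, profile):
    """Return (deliver, transcode): variants to serve as-is and variants to generate."""
    deliver, transcode = [], []
    for res in resources:
        for fmt in profile.needs:
            target = (res.name, fmt)
            (deliver if fmt in res.available else transcode).append(target)
    return deliver, transcode

lo = [Resource("lecture-01", {"video", "slides"})]
profile = LearnerProfile(needs={"captions", "slides"})
print(plan_adaptation(lo, profile))
# deliver ('lecture-01', 'slides') as-is; schedule transcoding of ('lecture-01', 'captions')
```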

Relevance: 90.00%

Publisher:

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of available observations to feed analysis systems designed for high-resolution regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli-Venezia Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the case of 8 July 2004 shows a substantially neutral impact.
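At its core, a 1D-Var retrieval minimises the standard variational cost function J(x) = 0.5 (x - xb)^T B^-1 (x - xb) + 0.5 (y - H(x))^T R^-1 (y - H(x)), where xb is the model background, y the observed radiances, and B and R the background- and observation-error covariances. A minimal sketch of that minimisation with a toy linear observation operator; every matrix and value below is hypothetical and stands in for the real radiative transfer model, channel selection, and error statistics:

```python
# Minimal sketch of a 1D-Var cost function and its minimisation; a generic
# illustration, not the study's operational code. All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

xb = np.array([285.0, 280.0, 0.006])          # background: two temperatures, one humidity
B = np.diag([1.5**2, 1.5**2, 0.001**2])       # background-error covariance
y = np.array([250.0, 262.0])                  # observed brightness temperatures
R = np.diag([0.8**2, 0.8**2])                 # observation-error covariance
H = np.array([[0.6, 0.3, 900.0],              # toy linearised observation operator
              [0.2, 0.7, 400.0]])

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    dxb = x - xb                               # departure from the background
    dy = y - H @ x                             # departure from the observations
    return 0.5 * dxb @ Binv @ dxb + 0.5 * dy @ Rinv @ dy

analysis = minimize(cost, xb).x                # the retrieved (analysed) profile
```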

Relevance: 90.00%

Publisher:

Abstract:

This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of features that make the system appealing to users and corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison to another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. In Chapter 4 the findings of the experiment are presented on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability, and perceived ease of use; then, an analysis of how users interacted with the corpora to complete the task and what kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.
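The queries at the heart of such tools are typically concordance (keyword-in-context) searches. A minimal, generic KWIC sketch, illustrative only and not code from either of the tools compared in the thesis:

```python
# Minimal keyword-in-context (KWIC) concordancer, the basic query type that
# corpus analysis tools provide; a generic illustration only.
import re

def kwic(text, keyword, context=30):
    """Yield (left, match, right) snippets for each occurrence of keyword."""
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text, flags=re.IGNORECASE):
        left = text[max(0, m.start() - context):m.start()]
        right = text[m.end():m.end() + context]
        yield left, m.group(), right

sample = "Ease of use and ease of access are distinct notions of usability."
for left, hit, right in kwic(sample, "ease"):
    print(f"{left:>30} [{hit}] {right}")
```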

Relevance: 90.00%

Publisher:

Abstract:

Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act by considering input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data collection systems to socially-aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing works on Pervasive Sensing have focused only on limited portions of the whole stack of available devices and of the data they can provide, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, able to enhance frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis focuses on pervasive sensing systems in order to extract design guidelines that serve as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios that present peculiar and different requirements, and different hardware and sensors. The ease of mapping from the proposed logical model to the real implementations and the positive results of the performance campaigns demonstrate the quality of the proposed approach and offer a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.
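The multi-tier structure the reference model addresses can be pictured as a small pipeline from battery-operated sensing nodes through gateways to a back end. A minimal sketch of that layering; tier names, readings, and thresholds are hypothetical and not taken from the thesis:

```python
# Minimal sketch of a multi-tier sensing pipeline: device tier -> gateway tier
# -> back-end tier. Illustrative only; values and thresholds are hypothetical.
import statistics

def device_tier():
    """Battery-operated nodes: produce raw samples (e.g. temperature readings)."""
    return [21.3, 21.5, 22.0, 21.8]

def gateway_tier(samples):
    """Smartphone/embedded gateway: aggregate locally to save energy and bandwidth."""
    return {"mean": statistics.mean(samples), "n": len(samples)}

def backend_tier(summary, alert_above=25.0):
    """Server tier: store, correlate, and act on aggregated data."""
    return "ALERT" if summary["mean"] > alert_above else "OK"

print(backend_tier(gateway_tier(device_tier())))
```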

Relevance: 90.00%

Publisher:

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers' responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages have been proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude speedups and energy efficiency gains compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares memory banks with the cores, and a template for a scalable architecture is shown, which integrates them through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
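One of the building blocks analyzed in the first part is data partitioning of parallel loops across cores. A minimal sketch of static chunking (the idea behind OpenMP's schedule(static)); the thesis runtime targets embedded many-cores in C, so this is only an illustration of the partitioning logic, not the runtime itself:

```python
# Minimal sketch of static loop partitioning in the spirit of OpenMP's
# "schedule(static)": split N iterations into near-equal contiguous chunks,
# one per core. Illustrative only.
def static_chunks(n_iterations, n_cores):
    """Return (start, end) iteration ranges, one per core."""
    base, extra = divmod(n_iterations, n_cores)
    ranges, start = [], 0
    for core in range(n_cores):
        size = base + (1 if core < extra else 0)   # first `extra` cores get one more
        ranges.append((start, start + size))
        start += size
    return ranges

print(static_chunks(10, 4))   # [(0, 3), (3, 6), (6, 8), (8, 10)]
```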

Relevance: 90.00%

Publisher:

Abstract:

Geochemical mapping is a valuable tool for territorial management that can be used not only in the identification of mineral resources and in geological, agricultural and forestry studies, but also in the monitoring of natural resources, offering solutions to environmental and economic problems. Stream sediments are widely used in the sampling campaigns carried out by governments and research groups worldwide because of their broad representativeness of rocks and soils, the ease of sampling, and the possibility of conducting very detailed sampling. In this context, the environmental role of stream sediments provides a good basis for the implementation of environmental management measures. In fact, the composition of river sediments is an important factor in understanding the complex dynamics that develop within catchment basins, and they therefore represent a critical environmental compartment: they can persistently incorporate pollutants after a contamination process and release them into the biosphere if environmental conditions change. It is essential to determine whether the concentrations of certain elements, in particular heavy metals, are the result of natural erosion of rocks containing high concentrations of specific elements or are generated as residues of human activities in a given study area. This PhD thesis aims to extract the widest possible spectrum of information from an extensive database on the stream sediments of the Romagna rivers. The study involved low- and high-order streams in the mountain and hilly areas, as well as the sediments of the floodplain, where intensive agriculture is practised. The geochemical signals recorded by the stream sediments will be interpreted in order to reconstruct the natural variability related to bedrock and soil contributions, the effects of river dynamics, and the anomalous sites; with the calculation of background values it will then be possible to evaluate the level of degradation and predict the environmental risk.
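Background values of this kind are often derived with robust statistics. A minimal sketch of one common approach (median plus two scaled MADs as the upper background limit), with hypothetical nickel concentrations rather than the thesis data, and not necessarily the method adopted in this work:

```python
# Minimal sketch of deriving a geochemical background value and flagging
# anomalies with median +/- 2 * MAD; concentrations below are hypothetical.
import numpy as np

ni_ppm = np.array([42, 45, 47, 50, 52, 55, 58, 61, 140, 63])   # Ni in stream sediments

median = np.median(ni_ppm)
mad = np.median(np.abs(ni_ppm - median)) * 1.4826   # scaled MAD, robust to outliers
upper_background = median + 2 * mad

anomalous = ni_ppm[ni_ppm > upper_background]
print(f"background upper limit: {upper_background:.1f} ppm, anomalies: {anomalous}")
```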

Relevance: 90.00%

Publisher:

Abstract:

A novel single-step synthetic procedure for hydrophobically modified alkali-soluble latexes (HASE) via a miniemulsion-analogous method is presented. This facile method simplifies the copolymerization of monomers with basically “opposite” character in terms of their hydrophilic/hydrophobic nature, which represents one of the main challenges in water-based systems. The systems considered do not represent classical miniemulsions due to the high content of water-soluble monomers. However, the polymerization mechanism was found to be rather similar to the miniemulsion polymerization process.
The influence of different factors on system stability has been investigated. The copolymerization behavior studies typically showed strong composition drifts during copolymerization. It was found that the copolymer composition drift can be suppressed by changing the initial monomer ratio.
The neutralization behavior of the obtained HASE systems was investigated via potentiometric titration. The rheological behavior of the obtained systems as a function of different parameters, such as pH, composition (ultrahydrophobe content) and additive type and content, has also been investigated.
Detailed investigation of the storage and loss moduli, damping factor and crossover frequencies of the samples showed that at the initial stages of neutralization the systems show microgel-like behavior.
The dependence of the rheological properties on the content and type of the ultrahydrophobe showed that tuning of the mechanical properties can easily be achieved via minor (a few percent) but significant changes in the content of the latter. Besides, changing the hydrophobicity of the ultrahydrophobe by increasing the carbon chain length represents another simple method for achieving the same result.
The influence of amphiphilic additives (especially alcohols) on the rheological behavior of the obtained systems has been studied. An analogy was made between the micellization of surfactants and the formation of hydrophobic domains between hydrophobic groups of the polymer side chains.
Dilution-induced viscosity reduction was investigated in different systems, without or with different amounts or types of the amphiphilic additive. The possibility of a controlled response to dilution was explored. It was concluded that the sensitivity towards dilution can be reduced, and in extreme cases even an increase of the dynamic modulus can be observed, which is of high importance for the setting behavior of the adhesive material.
In the last part of this work, the adhesive behavior of the obtained HASE systems was investigated on different substrates (polypropylene and glass) for standard labeling paper. Wet tack and setting behavior were studied and the trends for possible applications have been evaluated.
The novel synthetic procedure, the investigation of the rheological properties and the possibility of tuning via additives investigated in this work create a firm background for the development of HASE-based adhesives as well as rheology modifiers with a vast variety of possible applications, due to the ease of tuning the mechanical and rheological properties of the systems.

Relevance: 90.00%

Publisher:

Abstract:

Because the recommendation to use flowables for posterior restorations is still a matter of debate, the objective of this study was to determine, in a nationwide survey in Germany, how frequently, for what indications, and for what reasons German dentists use flowable composites in posterior teeth. In addition, the acceptance of a simplified filling technique for posterior restorations using a low-stress flowable composite was evaluated. Completed questionnaires from all over Germany were returned by 1,449 dentists, resulting in a response rate of 48.5%; 78.6% of them regularly used flowable composites for posterior restorations. The most frequent indications were cavity lining (80.1%) and small Class I fillings (74.2%). Flowables were less frequently used for small Class II fillings (22.7%) or other indications (13.6%). The most frequent reasons given for the use of flowables in posterior teeth were the prevention of voids (71.7%) and superior adaptation to cavity walls (72.9%), whereas saving time was considered less important (13.8%). Based on the subjective opinion of the dentists, the simplified filling technique seemed to offer advantages over the methods used to date, particularly with regard to good cavity adaptation and ease of use. In conclusion, resin composites are the standard material used for posterior restorations by general dental practitioners in Germany, and most dentists use flowable composites as liners.

Relevance: 90.00%

Publisher:

Abstract:

Access and accessibility are important determinants of people’s ability to utilise natural resources, and have a strong impact on household welfare. Physical accessibility of natural resources, on the other hand, has generally been regarded as one of the most important drivers of land-use and land-cover changes. Based on two case studies, this article discusses evidence of the impact of access to services and access to natural resources on household poverty and on the environment. We show that socio-cultural distances are a key limiting factor for gaining access to services, and thereby for improved household welfare. We also discuss the impact of socio-cultural distances on access to natural resources, and show that large-scale commercial exploitation of natural resources tends to occur beyond the spatial reach of socio-culturally and economically marginalised population segments. We conclude that it is essential to pay more attention to improving the structural environment that presently leaves social minority groups marginalised. Innovative approaches that use natural resource management to induce poverty reduction – for example, through compensation of local farmers for environmental services – appear to be promising avenues that can lead to integration of the objectives of poverty reduction and sustainable environmental stewardship.

Relevance: 90.00%

Publisher:

Abstract:

The study of musical timbre by Bailes (2007) raises important questions concerning the relative ease of imaging complex perceptual attributes such as timbre, compared to more unidimensional attributes. I also raise the issue of individual differences in auditory imagery ability, especially for timbre.

Relevance: 90.00%

Publisher:

Abstract:

Conservation agriculture that focuses on soil recovery is both economically and environmentally sustainable. This lies in contrast with many current agricultural practices, which push for high production and, in turn, lead to over-depletion of the soil. Agricultural interest groups play a role in crafting farming policies with governmental officials. Therefore, my study examined three types of interest groups (agribusinesses, farmer organizations, and environmental NGOs) that seek to influence agricultural policy, focusing specifically on the federal farm bill because of its large impact throughout the nation. The research, in which data was gathered through subject interviews, a literature review, and databases, found that access to governmental officials affects the amount of influence a group can have. Access is contingent upon: 1) the number of networks (social, professional, and political), 2) the amount of money spent through campaign contributions and lobbying expenditures, and 3) the extent of business enterprises and subsidiaries. The evidence shows that there is a correlation between these variables and the extent of access. My research concludes that agribusiness interest groups have the most access to government officials, and thus have the greatest influence on agricultural policies. Because agribusinesses support commodity-crop subsidies, this indirectly impacts conservation agriculture, as the two programs compete in a zero-sum game for funding in the farm bills.

Relevance: 90.00%

Publisher:

Abstract:

More than 250,000 hip fractures occur annually in the United States, and the most common fracture location is the femoral neck, the weakest region of the femur. Hip fixation surgery is conducted to repair hip fractures by using a Kirschner (K-) wire as a temporary guide for permanent bone screws. Variation has been observed in the force required to extract the K-wire from the femoral head during surgery. It is hypothesized that a relationship exists between the K-wire pullout force and the bone quality at the site of extraction. Currently, bone mineral density (BMD) is used as a predictor of bone quality and strength. However, BMD characterizes the entire skeletal system and does not account for localized bone quality and factors such as lifestyle, nutrition, and drug use. A patient's BMD may therefore not accurately describe the quality of bone at the site of fracture. This study aims to investigate a correlation between the force required to extract a K-wire from femoral head specimens and the quality of bone. A procedure to measure K-wire pullout force was developed and tested with pig femoral head specimens. The procedure was then implemented on 8 human osteoarthritic femoral head specimens, and the average pullout force for each ranged from 563.32 ± 240.38 N to 1041.01 ± 346.84 N. The data exhibited significant variation within and between specimens, and no statistically significant relationships were found between pullout force and patient age, weight, height, BMI, inorganic-to-organic matter ratio, or BMD. A new testing fixture was designed and manufactured to merge the clinical and research environments by enabling the physician to extract the K-wire from each bone specimen himself. The new device allows the physician to gather tactile feedback on the relative ease of extraction while the load history is recorded, similar to the previous data acquisition procedure. Future work will include testing human bones with the new device to further investigate correlations for predicting bone quality.

Relevance: 90.00%

Publisher:

Abstract:

Anaerobic digestion of food scraps has the potential to accomplish waste minimization, energy production, and compost or humus production. At Bucknell University, removal of food scraps from the waste stream could reduce municipal solid waste transportation costs and landfill tipping fees, and provide methane and humus for use on campus. To determine the suitability of food waste produced at Bucknell for high-solids anaerobic digestion (HSAD), a year-long characterization study was conducted. Physical and chemical properties, waste biodegradability, and annual production of biodegradable waste were assessed. Bucknell University food and landscape waste was digested at pilot scale for over a year to test performance at low and high loading rates, ease of operation at 20% solids, and the benefits of codigestion of food and landscape waste, and to provide digestate for studies assessing the curing needs of HSAD digestate. A laboratory-scale curing study was conducted to assess the curing duration required to reduce microbial activity, phytotoxicity, and odors to acceptable levels for subsequent use of the humus. The characteristics of Bucknell University food and landscape waste were tested approximately weekly for one year to determine chemical oxygen demand (COD), total solids (TS), volatile solids (VS), and biodegradability (from batch digestion studies). Fats, oil, and grease and total Kjeldahl nitrogen were also tested for some food waste samples. Based on the characterization and biodegradability studies, Bucknell University dining hall food waste is a good candidate for HSAD. During batch digestion studies, Bucknell University food waste produced a mean of 288 mL CH4/g COD with a 95% confidence interval of 0.06 mL CH4/g COD. The addition of landscape waste for digestion increased methane production from both food and landscape waste; however, because the landscape waste biodegradability was extremely low, the increase was small. Based on an informal waste audit, Bucknell could collect up to 100 tons of food waste from dining facilities each year. The pilot-scale high-solids anaerobic digestion study confirmed that digestion of Bucknell University food waste combined with landscape waste at a low organic loading rate (OLR) of 2 g COD/L reactor volume-day is feasible. During low-OLR operation, stable reactor performance was demonstrated through monitoring of biogas production and composition, reactor total and volatile solids, total and soluble chemical oxygen demand, volatile fatty acid content, pH, and bicarbonate alkalinity. Low-OLR HSAD of Bucknell University food waste and landscape waste combined produced 232 L CH4/kg COD and 229 L CH4/kg VS. When the OLR was increased to high loading (15 g COD/L reactor volume-day) to assess maximum loading conditions, reactor performance became unstable due to ammonia accumulation and subsequent inhibition. The methane production per unit COD also decreased (to 211 L CH4/kg COD fed), although methane production per unit VS increased (to 272 L CH4/kg VS fed). The degree of ammonia inhibition was investigated through respirometry, in which reactor digestate was diluted and exposed to varying concentrations of ammonia. Treatments with low ammonia concentrations recovered quickly from ammonia inhibition within the reactor. The post-digestion curing process was studied at laboratory scale to provide a preliminary assessment of curing duration.
Digestate was mixed with woodchips and incubated in an insulated container at 35 °C to simulate full-scale curing self-heating conditions. The degree of digestate stabilization was determined through oxygen uptake rates, percent O2, temperature, volatile solids, and the Solvita Maturity Index. Phytotoxicity was determined through observation of volatile fatty acid and ammonia concentrations. Stabilization of organics and elimination of phytotoxic compounds (after 10–15 days of curing) preceded significant reductions of volatile sulfur compounds (hydrogen sulfide, methanethiol, and dimethyl sulfide) after 15–20 days of curing. Bucknell University food waste has high biodegradability and is suitable for high-solids anaerobic digestion; however, it has a low C:N ratio, which can result in ammonia accumulation under some operating conditions. The low biodegradability of Bucknell University landscape waste limits the amount of bioavailable carbon that it can contribute, making it unsuitable for use as a cosubstrate to increase the C:N ratio of food waste. Additional research is indicated to determine other cosubstrates with higher biodegradabilities that may allow successful HSAD of Bucknell University food waste at high OLRs. Some cosubstrates to investigate are office paper, field residues, or grease trap waste. A brief curing period of less than 3 weeks was sufficient to produce viable humus from digestate produced by low-OLR HSAD of food and landscape waste.
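The loading and yield figures quoted above follow from simple ratios. A minimal sketch of that arithmetic, with a hypothetical reactor volume and daily feed chosen only to reproduce the reported order of magnitude (they are not the study's actual operating values):

```python
# Minimal sketch of organic loading rate (g COD per litre of reactor per day)
# and methane yield per unit COD fed. Reactor volume and feed are hypothetical.
reactor_volume_L = 100.0
feed_COD_g_per_day = 200.0           # COD fed daily
methane_L_per_day = 46.4             # CH4 produced daily

olr = feed_COD_g_per_day / reactor_volume_L                         # g COD / L reactor-day
methane_yield = methane_L_per_day / (feed_COD_g_per_day / 1000.0)   # L CH4 / kg COD fed

print(f"OLR = {olr:.1f} g COD/L-day, methane yield = {methane_yield:.0f} L CH4/kg COD")
# -> OLR = 2.0 g COD/L-day, methane yield = 232 L CH4/kg COD
```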