138 results for: timetabling, lecture timetable, web interface, optimization, GUI
Abstract:
BACKGROUND: Web-based programs are a potential medium for supporting weight loss because of their accessibility and wide reach. Research is warranted to determine the shorter- and longer-term effects of these programs in relation to weight loss and other health outcomes.
OBJECTIVE: The aim was to evaluate the effects of a Web-based component of a weight loss service (Imperative Health) in an overweight/obese population at risk of cardiovascular disease (CVD) using a randomized controlled design and a true control group.
METHODS: A total of 65 overweight/obese adults at high risk of CVD were randomly allocated to 1 of 2 groups. Group 1 (n=32) was provided with the Web-based program, which supported positive dietary and physical activity changes and assisted in managing weight. Group 2 continued with their usual self-care (n=33). Assessments were conducted face-to-face. The primary outcome was between-group change in weight at 3 months. Secondary outcomes included between-group change in anthropometric measurements, blood pressure, lipid measurements, physical activity, and energy intake at 3, 6, and 12 months. Interviews were conducted to explore participants' views of the Web-based program.
RESULTS: Retention rates for the intervention and control groups at 3 months were 78% (25/32) vs 97% (32/33), at 6 months were 66% (21/32) vs 94% (31/33), and at 12 months were 53% (17/32) vs 88% (29/33). Intention-to-treat analysis, using the baseline observation carried forward imputation method, revealed that the intervention group lost more weight relative to the control group at 3 months (mean -3.41, 95% CI -4.70 to -2.13 kg vs mean -0.52, 95% CI -1.55 to 0.52 kg, P<.001) and at 6 months (mean -3.47, 95% CI -4.95 to -1.98 kg vs mean -0.81, 95% CI -2.23 to 0.61 kg, P=.02), but not at 12 months (mean -2.38, 95% CI -3.48 to -0.97 kg vs mean -1.80, 95% CI -3.15 to -0.44 kg, P=.77). More intervention group participants lost ≥5% of their baseline body weight at 3 months (34%, 11/32 vs 3%, 1/33, P<.001) and 6 months (41%, 13/32 vs 18%, 6/33, P=.047), but not at 12 months (22%, 7/32 vs 21%, 7/33, P=.95), versus the control group. The intervention group showed improvements in total cholesterol and triglycerides and adopted more positive dietary and physical activity behaviors for up to 3 months versus the control group; however, these improvements were not sustained.
CONCLUSIONS: Although the intervention group had high attrition levels, this study provides evidence that this Web-based program can be used to initiate clinically relevant weight loss and to lower CVD risk for up to 3-6 months, based on the proportion of intervention group participants losing ≥5% of their body weight compared with the control group. It also highlights the need to augment Web-based programs with further interventions, such as in-person support, to enhance engagement and maintain these changes.
Abstract:
An orchestration is a multi-threaded computation that invokes a number of remote services. In practice, the responsiveness of a web-service fluctuates with demand; during surges in activity service responsiveness may be degraded, perhaps even to the point of failure. An uncertainty profile formalizes a user's perception of the effects of stress on an orchestration of web-services; it describes a strategic situation, modelled by a zero-sum angel–daemon game. Stressed web-service scenarios are analysed, using game theory, in a realistic way, lying between over-optimism (services are entirely reliable) and over-pessimism (all services are broken). The ‘resilience’ of an uncertainty profile can be assessed using the valuation of its associated zero-sum game. In order to demonstrate the validity of the approach, we consider two measures of resilience and a number of different stress models. It is shown how (i) uncertainty profiles can be ordered by risk (as measured by game valuations) and (ii) the structural properties of risk partial orders can be analysed.
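For illustration only, the following Python sketch computes the value of a small zero-sum game by linear programming, the kind of game valuation used above to assess resilience; the payoff matrix, the reading of rows as angel choices and columns as daemon choices, and all numbers are assumptions made for the example, not data from the paper.

```python
# Hedged sketch: valuing a toy zero-sum angel-daemon game with an LP.
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of a zero-sum game for the row player (the 'angel'),
    who maximises the expected payoff of matrix A."""
    m, n = A.shape
    # Variables: mixed strategy x (m entries) followed by the value v.
    # Minimise -v  <=>  maximise v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # Constraints: for every daemon column j,  v - (A^T x)_j <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Strategy probabilities sum to one.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Toy "stressed orchestration": rows are angel choices of which services to
# protect, columns are daemon choices of which services to degrade; the
# entries are illustrative payoffs only.
payoffs = np.array([[3.0, 1.0],
                    [0.0, 2.0]])
value, angel_strategy = game_value(payoffs)
print(f"game value (resilience measure): {value:.2f}")   # 1.50 for this matrix
print(f"angel mixed strategy: {angel_strategy}")
```

Comparing such valuations across different uncertainty profiles is what induces the risk partial orders discussed above.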
Abstract:
Globally, lakes bury and remineralise significant quantities of terrestrial C, and the associated flux of terrestrial C strongly influences their functioning. Changing deposition chemistry, land use and climate-induced impacts on hydrology will affect soil biogeochemistry and terrestrial C export [1], and hence lake ecology, with potential feedbacks for regional and global C cycling. C and nitrogen stable isotope analysis (SIA) has identified the terrestrial subsidy of freshwater food webs. The approach relies on different 13C fractionation in aquatic and terrestrial primary producers, but also on the fact that the inorganic C demands of aquatic primary producers are partly met by 13C-depleted C from respiration of terrestrial C and by 'old' C derived from weathering of catchment geology. SIA thus fails to differentiate between the contributions of old and recently fixed terrestrial C. Natural abundance 14C can be used as an additional biomarker to untangle riverine food webs [2] where aquatic and terrestrial δ13C overlap, but may also be valuable for examining the age and origin of C in the lake. Primary production in lakes is based on dissolved inorganic C (DIC). DIC in alkaline lakes is partially derived from weathering of carbonaceous bedrock, a proportion of which is 14C-free. The low 14C activity yields an artificial age offset, leading samples to appear hundreds to thousands of years older than their actual age. As such, 14C can be used to identify the proportion of autochthonous C in the food web. With terrestrial C inputs likely to increase, the origin and utilisation of 'fossil' or 'recent' allochthonous C in the food web can also be determined. Stable isotopes and 14C were measured for biota, particulate organic matter (POM), DIC and dissolved organic carbon (DOC) from Lough Erne, Northern Ireland, a humic alkaline lake. Temporal and spatial variation was evident in DIC, DOC and POM C isotopes, with implications for the fluctuation in terrestrial export processes. Ramped pyrolysis of lake surface sediment indicates the burial of two C components. The 14C activity (507 ± 30 BP) of sediment combusted at 400 °C was consistent with algal values and younger than bulk sediment values (1097 ± 30 BP). The sample was subsequently combusted at 850 °C, yielding 14C values (1471 ± 30 BP) older than the bulk sediment age, suggesting that fossil terrestrial carbon is also buried in the sediment. Stable isotopes in the food web indicate that terrestrial organic C is also utilised by lake organisms. High winter δ15N values in calanoid zooplankton (δ15N = 24‰) relative to phytoplankton and POM (δ15N = 6‰ and 12‰, respectively) may reflect several microbial trophic levels between terrestrial C and calanoids. Furthermore, winter calanoid 14C ages are consistent with DOC from an inflowing river (75 ± 24 BP), not phytoplankton (367 ± 70 BP). Summer calanoid δ13C, δ15N and 14C (345 ± 80 BP) indicate greater reliance on phytoplankton.
[1] Monteith, D.T., et al. (2007) Dissolved organic carbon trends resulting from changes in atmospheric deposition chemistry. Nature, 450:537-540.
[2] Caraco, N., et al. (2010) Millennial-aged organic carbon subsidies to a modern river food web. Ecology, 91:2385-2393.
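As context for the ages quoted above (a standard relation, not one introduced by this study), the conventional radiocarbon age follows from the normalised sample activity relative to the modern standard, assuming the Libby mean life of 8033 yr:

\[
  t \;=\; -8033 \,\ln\!\left(\frac{A_{\mathrm{SN}}}{A_{\mathrm{ON}}}\right)\ \text{yr BP}
\]

where \(A_{\mathrm{SN}}\) is the normalised sample activity and \(A_{\mathrm{ON}}\) the normalised modern standard activity. Dilution of lake DIC by 14C-free bedrock carbon lowers \(A_{\mathrm{SN}}\) and therefore inflates the apparent age, which is the offset exploited above to trace autochthonous versus allochthonous C.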
Abstract:
In this study, we investigate an adaptive decomposition and ordering strategy that automatically divides examinations into difficult and easy sets for constructing an examination timetable. The examinations in the difficult set are considered hard to place and hence are listed before the ones in the easy set in the construction process. Moreover, the examinations within each set are ordered using different strategies based on graph colouring heuristics. Initially, all examinations are placed in the easy set. During the construction process, examinations that cannot be scheduled are identified as the ones causing infeasibility and are moved forward into the difficult set to ensure earlier assignment in subsequent attempts. On the other hand, the examinations that can be scheduled remain in the easy set.
Within the easy set, a new subset called the boundary set is introduced to accommodate shuffling strategies to change the given ordering of examinations. The proposed approach, which incorporates different ordering and shuffling strategies, is explored on the Carter benchmark problems. The empirical results show that the performance of our algorithm is broadly comparable to existing constructive approaches.
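As an illustration of the difficult/easy decomposition idea described above, here is a minimal Python sketch; the conflict graph, the largest-degree ordering heuristic, the first-fit slot assignment and the retry limit are assumptions made for the example rather than details taken from the paper, and the boundary-set shuffling is omitted.

```python
# Hedged sketch of adaptive difficult/easy ordering for exam timetabling.
from collections import defaultdict

def build_timetable(exams, conflicts, n_slots, max_rounds=50):
    """exams: list of exam ids; conflicts: set of frozenset({a, b}) pairs
    that cannot share a timeslot; returns (assignment, difficult_set)."""
    degree = defaultdict(int)
    for a, b in (tuple(c) for c in conflicts):
        degree[a] += 1
        degree[b] += 1

    # Largest-degree ordering stands in for the graph-colouring heuristics.
    difficult, easy = [], sorted(exams, key=lambda e: -degree[e])
    for _ in range(max_rounds):
        assignment, unplaced = {}, []
        # Difficult exams are listed (and therefore placed) first.
        for exam in difficult + easy:
            free = [s for s in range(n_slots)
                    if all(frozenset({exam, other}) not in conflicts
                           for other, slot in assignment.items() if slot == s)]
            if free:
                assignment[exam] = free[0]
            else:
                unplaced.append(exam)
        if not unplaced:
            return assignment, difficult
        # Exams causing infeasibility move forward into the difficult set,
        # so they are assigned earlier in the next attempt.
        difficult = unplaced + [e for e in difficult if e not in unplaced]
        easy = [e for e in easy if e not in unplaced]
    return assignment, difficult

exams = ["E1", "E2", "E3", "E4"]
conflicts = {frozenset({"E1", "E2"}), frozenset({"E1", "E3"}), frozenset({"E2", "E3"})}
# With fewer slots than the clique needs, the clashing exams would accumulate
# in the difficult set across rounds; three slots suffice here.
print(build_timetable(exams, conflicts, n_slots=3))
```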
Abstract:
The continued use of traditional lecturing across Higher Education as the main teaching and learning approach in many disciplines must be challenged. An increasing number of studies suggest that this approach, compared to more active learning methods, is the least effective. In counterargument, the use of traditional lectures is often justified as necessary given a large student population. By analysing the implementation of a web-based broadcasting approach which replaced the traditional lecture within a programming-based module, and thereby removed the student-population rationale, it was hoped that the student learning experience would become more active and ultimately enhance learning on the module. The implemented model replaces the traditional approach of students attending an on-campus lecture theatre with a web-based live broadcast approach that focuses on students being active learners rather than passive recipients. Students ‘attend’ by viewing a live broadcast of the lecturer, presented as a talking head, and the lecturer’s desktop, via a web browser. Video and audio communication is primarily from tutor to students, with text-based comments used to provide communication from students to tutor. This approach promotes active learning by allowing students to perform activities on their own computer rather than the passive viewing and listening commonly encountered in large lecture classes. Analysis of this approach over two years (n = 234 students) indicates that 89.6% of students rated it as offering a highly positive learning experience. Comparing student performance across three academic years also indicates a positive change. A small learning-analytics study of student participation levels suggests that the cohort's willingness to engage with the broadcast lecture material is high.
Abstract:
We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user’s web search session into a footprint in the knowledge base that could be used to enrich the user profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which is possibly linked to multiple entities, enabling the usage of measures like entity set coherence. Since web search queries are very small text fragments, such criteria that rely on the existence of a multitude of mentions do not work well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase does IR-style scoring of entities against the search query to narrow down to a subset of entities, which is then expanded into a larger set using hyperlink information in the second phase. Lastly, we use a graph traversal approach to identify the top entities to link the query to. Through an empirical evaluation on real-world web search queries, we illustrate that our methods significantly enhance the linking accuracy over state-of-the-art methods.
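To make the three phases concrete, a toy Python sketch follows; the miniature knowledge base, the term-overlap scoring in phase one and the in-link counting used in place of a full graph traversal in phase three are illustrative assumptions, not the paper's actual models.

```python
# Hedged sketch of the three-phase query-to-entity linking pipeline.
from collections import Counter

# Toy knowledge base: entity -> outgoing hyperlinks (illustrative only).
knowledge_base = {
    "Python (programming language)": {"Guido van Rossum", "CPython"},
    "Python (genus)": {"Snake"},
    "Guido van Rossum": {"Python (programming language)"},
    "CPython": {"Python (programming language)"},
    "Snake": {"Reptile"},
    "Reptile": set(),
}

def phase1_ir_score(query, k=3):
    """Phase 1: crude IR-style scoring -- count query terms found in the title."""
    terms = query.lower().split()
    scores = {e: sum(t in e.lower() for t in terms) for e in knowledge_base}
    return [e for e, s in Counter(scores).most_common(k) if s > 0]

def phase2_expand(seeds):
    """Phase 2: grow the candidate set with entities hyperlinked from the seeds."""
    expanded = set(seeds)
    for e in seeds:
        expanded |= knowledge_base[e]
    return expanded

def phase3_rank(candidates, top_n=1):
    """Phase 3: rank candidates by in-links received from other candidates,
    a simple stand-in for the graph-traversal step."""
    indegree = Counter()
    for e in candidates:
        for target in knowledge_base[e] & candidates:
            indegree[target] += 1
    return [e for e, _ in indegree.most_common(top_n)]

query = "python language tutorial"
print(phase3_rank(phase2_expand(phase1_ir_score(query))))
# -> ['Python (programming language)']
```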
Abstract:
This paper is concerned with the application of an automated hybrid approach in addressing the university timetabling problem. The approach described is based on the nature-inspired artificial bee colony (ABC) algorithm. An ABC algorithm is a biologically inspired optimization approach, which has been widely implemented in solving a range of optimization problems in recent years, such as job shop scheduling and machine timetabling problems. Although the approach has proven to be robust across a range of problems, it is acknowledged within the literature that there currently exist a number of inefficiencies regarding its exploration and exploitation abilities. These inefficiencies can often lead to a slow convergence speed within the search process. Hence, this paper introduces a variant of the algorithm which utilizes a global best model inspired by particle swarm optimization to enhance the global exploration ability while hybridizing with the great deluge (GD) algorithm in order to improve the local exploitation ability. Using this approach, an effective balance between exploration and exploitation is attained. In addition, a traditional local search approach is incorporated within the GD algorithm with the aim of further enhancing the performance of the overall hybrid method. To evaluate the performance of the proposed approach, two diverse university timetabling datasets are investigated, i.e., the Carter examination timetabling and Socha course timetabling datasets. It should be noted that the two problems have differing complexity and different solution landscapes. Experimental results demonstrate that the proposed method is capable of producing high quality solutions across both benchmark problems, showing a good degree of generality in the approach. Moreover, the proposed method produces the best results on some instances compared with other approaches presented in the literature.
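The global-best idea can be sketched in a few lines of Python; the candidate-generation formula below follows the commonly cited gbest-guided ABC update rather than necessarily the exact operator used here, the Sphere test function stands in for a timetabling objective, and the onlooker and scout phases of ABC are omitted, so everything in the block is an illustrative assumption.

```python
# Hedged sketch of a gbest-guided candidate move in an ABC-style search.
import random

def sphere(x):
    """Toy continuous objective standing in for a timetabling cost."""
    return sum(v * v for v in x)

def gbest_guided_candidate(x, partner, gbest, c=1.5):
    """Perturb one dimension of x toward a random partner and the global best."""
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, c)
    new = list(x)
    new[j] = x[j] + phi * (x[j] - partner[j]) + psi * (gbest[j] - x[j])
    return new

random.seed(0)
dim, n_bees, iterations = 5, 10, 200
food = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bees)]
gbest = min(food, key=sphere)

for _ in range(iterations):
    for i, x in enumerate(food):
        partner = random.choice(food)
        cand = gbest_guided_candidate(x, partner, gbest)
        if sphere(cand) < sphere(x):      # greedy selection, as in basic ABC
            food[i] = cand
    gbest = min(food + [gbest], key=sphere)

print(f"best objective after search: {sphere(gbest):.4f}")
```

The pull toward the global best is what sharpens exploration; in the hybrid described above, great deluge then refines the promising solutions locally.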
Abstract:
Free-roaming dogs (FRD) represent a potential threat to the quality of life in cities from an ecological, social and public health point of view. One of the most urgent concerns is the role of uncontrolled dogs as reservoirs of infectious diseases transmittable to humans and, above all, rabies. An estimate of the FRD population size and characteristics in a given area is the first step for any relevant intervention programme. Direct count methods are still prominent because of their non-invasive approach; information technologies can support such methods, facilitating data collection and allowing for more efficient data handling. This paper presents a new framework for data collection using a topological algorithm implemented as an ArcScript in ESRI® ArcGIS software, which allows for a random selection of the sampling areas. It also supplies a mobile phone application for Android® operating system devices which integrates the Global Positioning System (GPS) and Google Maps™. The potential of such a framework was tested in 2 Italian regions. Coupling technological and innovative solutions with common counting methods facilitates data collection and transcription. It also paves the way to future applications, which could support dog population management systems.
Abstract:
The risks associated with zoonotic infections transmitted by companion animals are a serious public health concern: the control of the incidence of zoonoses in domestic dogs, both owned and stray, is hence important to protect human health. Integrated dog population management (DPM) programs, based on the availability of information systems providing reliable data on the structure and composition of the existing dog population in a given area, are fundamental for making realistic plans for any disease surveillance and action system. Traceability systems, based on the compulsory electronic identification of dogs and their registration in a computerised database, are one of the most effective ways to ensure the usefulness of DPM programs. Even though this approach provides many advantages, several areas for improvement have emerged in countries where it has been applied. In Italy, every region hosts its own dog register, but these are not compatible with one another. This paper shows the advantages of a web-based application to improve data management of regional dog registers. The approach used for building this system was inspired by farm animal traceability schemes and relies on a network of services that allows multi-channel access by different devices and data exchange via the web with other existing applications, without changing the pre-existing platforms. Today the system manages a database of over 300,000 dogs registered in three different Italian regions. By integrating multiple Web Services, this approach could be a solution for gathering data at national and international levels at reasonable cost and for creating a large-scale, cross-border traceability system that can be used for disease surveillance and the development of population management plans.
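As a purely hypothetical sketch of the kind of service-based data exchange described above, the following Python example exposes register records as JSON over HTTP using only the standard library; the endpoint layout, the microchip codes and the record fields are invented for illustration and do not reflect the actual Italian registers.

```python
# Hypothetical sketch of a register lookup service for cross-register exchange.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory register keyed by microchip code (illustrative data only).
REGISTER = {
    "380260000000001": {"region": "Abruzzo", "breed": "mixed", "status": "owned"},
    "380260000000002": {"region": "Lazio", "breed": "segugio", "status": "shelter"},
}

class RegisterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expected path: /dogs/<microchip>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "dogs" and parts[1] in REGISTER:
            body = json.dumps(REGISTER[parts[1]]).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "not found"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Other applications (mobile apps, regional platforms) can consume the
    # JSON over HTTP without changing their own pre-existing systems.
    HTTPServer(("localhost", 8000), RegisterHandler).serve_forever()
```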
Abstract:
Generating timetables for an institution is a challenging and time consuming task due to different demands on the overall structure of the timetable. In this paper, a new hybrid method combining the great deluge and artificial bee colony algorithms (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population based method that has been introduced in recent years and has proven successful in solving various optimization problems effectively. However, as with many search based approaches, there exist weaknesses in its exploration and exploitation abilities which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. Also, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm. In addition, the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two differing benchmark datasets, i.e., examination and course timetabling datasets. A statistical t-test shows that the performance of the proposed approach is significantly better than that of the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods in the literature, with the results obtained being competitive and, in certain cases, achieving some of the current best results in the literature.
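For readers unfamiliar with the great deluge component mentioned above, the short Python sketch below illustrates its acceptance rule on a toy permutation problem; the linear water-level decay, the random-swap neighbourhood and the adjacent-inversion cost are assumptions chosen for the example, not the paper's timetabling moves.

```python
# Hedged sketch of the great deluge acceptance rule on a toy problem.
import random

def great_deluge(initial, cost, neighbour, target_cost, iterations):
    current = initial
    level = cost(initial)                        # initial "water level"
    decay = (level - target_cost) / iterations   # linear decay toward the target
    best = current
    for _ in range(iterations):
        cand = neighbour(current)
        # Accept improvements, or worsening moves that stay below the level.
        if cost(cand) <= cost(current) or cost(cand) <= level:
            current = cand
            if cost(current) < cost(best):
                best = current
        level -= decay
    return best

# Toy problem: reorder numbers to minimise the count of adjacent inversions.
def inversions(perm):
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def swap_two(perm):
    i, j = random.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

random.seed(1)
start = random.sample(range(20), 20)
best = great_deluge(start, inversions, swap_two, target_cost=0, iterations=5000)
print(f"remaining inversions: {inversions(best)}")
```

In the hybrid above, this acceptance rule is what allows controlled uphill moves while the Nelder–Mead step fine-tunes promising regions.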