873 results for terms of service
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources for ensuring that the performance requirements of all their applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while the performance of tenants' applications is maximized.
Motivated by the complexities associated with the management and scaling of distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
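To make the baseline concrete, here is a minimal sketch of the kind of reactive, SLA-driven VM-scaling rule the thesis compares against. All thresholds and names are illustrative assumptions, not taken from the thesis itself.

```python
# Hypothetical reactive scaling rule: compare the observed response time
# against the SLA target and add or remove one VM per control interval.
def scale_decision(current_vms, avg_response_ms, sla_target_ms,
                   upper=0.9, lower=0.5, min_vms=1, max_vms=20):
    """Return the new VM count for one control interval."""
    utilization = avg_response_ms / sla_target_ms  # > 1 means the SLA is violated
    if utilization > upper:           # close to (or past) the SLA bound: scale out
        return min(current_vms + 1, max_vms)
    if utilization < lower:           # plenty of headroom: scale in to save cost
        return max(current_vms - 1, min_vms)
    return current_vms                # within the comfort band: hold steady
```

A rule like this reacts only after load has changed, which is precisely the limitation that motivates the SLA-aware, multi-objective approaches the thesis proposes.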
Abstract:
A life table methodology was developed which estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to the expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to world-wide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers.
This racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males. This tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system and musculoskeletal system were among the three leading causes of cumulative sick time across years of service.
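The illness-impact measure defined above is a simple ratio; the following sketch, with invented numbers, shows how it would be computed once the two life-table expectations are in hand.

```python
# Illness impact = expected remaining Army sick time divided by expected
# remaining Army service time, in a common unit (days). The inputs here
# are hypothetical; the study derives them from life tables.
def illness_impact(remaining_service_years, remaining_sick_days):
    """Ratio of expected remaining sick time to expected remaining
    service time."""
    remaining_service_days = remaining_service_years * 365.25
    return remaining_sick_days / remaining_service_days

# Illustrative only: a soldier with 12 expected remaining service years
# and 90 expected remaining sick days.
ratio = illness_impact(12, 90)
```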
Abstract:
Rational health services planning requires an examination of the effects of various factors on the health status of a population within a given set of socioeconomic circumstances. The commonly accepted explanations for improved health in the less developed countries (LDCs) are: the Health Service Resources available to a population, Environmental and Life Conditions, and the Econosociocultural Characteristics of the population. In the context of the low economic base from which many LDCs initiate development activities, a strong imperative exists for identifying in which of these major areas public health policy would be most effective in terms of improving health. A new conceptual model is proposed that would be used for future policy analyses to assess what changes in the health status of populations in LDCs can be expected as direct functions of increased health service resources and of improved environmental and econosociocultural conditions. While direct policy analysis is ill-advised at this time due to data inadequacy, the model is illustrated using data presently available for twenty-five relatively homogeneous Sub-Saharan African countries. Within the limitations of available data, study findings indicate that while econosociocultural conditions were the most important explanatory factors of the three major independent variables in 1970, health service resources became the most important in 1975. Study findings are inconclusive at this time with regard to the relative contributions of physicians and medical assistants in explaining variances in mortality in these countries. Because of the deficient nature of available data, study findings should be interpreted very cautiously. Tests of statistical significance of study findings were bypassed because of their situational technical inappropriateness.
This study is significant in being the first of its kind and scope to focus on the Sub-Saharan African region of the World Health Organization, using the Wroclaw Taxonomic Method in conjunction with a stepwise regression technique. It is desirable, therefore, to examine the observed magnitude and directional consistency of all hypothesized relationships, even if the evidence is inconclusive.
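As a minimal illustration of the regression building block underlying the stepwise technique mentioned above, here is a pure-Python ordinary least squares fit of a health-status indicator on one explanatory factor. All numbers and variable names are invented for illustration.

```python
# Closed-form simple linear regression: y ~ a + b*x.
def ols_slope_intercept(x, y):
    """Return intercept a and slope b from paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical data: physicians per 10,000 population vs. a mortality index.
physicians = [1, 2, 3, 4, 5]
mortality = [150, 140, 128, 121, 110]
a, b = ols_slope_intercept(physicians, mortality)  # b comes out negative
```

A stepwise procedure repeats fits of this kind, adding (or dropping) one explanatory variable at a time according to how much it improves the fit.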
Abstract:
This paper reports a cost-effectiveness analysis of standard therapeutic interventions received by ambulatory dually diagnosed clients of a Community Mental Health Center (CMHC). For the purposes of this study, dually diagnosed was defined as a DSM-III-R or IV diagnosis of a major mental disorder and a concomitant substance abuse disorder. The prevalence of dually diagnosed people among the mentally ill, and their unique and problematic nature, continues to challenge and encumber CMHCs and poses grave public health risks. An absence of research on these clients in community-based settings, and on the cost-effectiveness of their standard CMHC care, has hindered the development of effective community-based intervention strategies. This exploratory and descriptive effort is a first step toward providing information on which to base programmatic management decisions. Data for this study were derived from electronic client records of a CMHC located in a large Southwestern Sun Belt metropolitan area. A total of 220 records were collected on clients consecutively admitted during a two-and-one-half-year period. Information was gathered profiling the clients' background characteristics, receipt of standard services and treatments, costs of the care they received, and length of CMHC enrollment and subsequent psychiatric hospitalizations.
The services and treatments were compared with regard to their costs and predicted contributions toward maintaining clients in the community and out of public psychiatric hospitals. This study investigated: (1) the study group's background, mental illness, and substance abuse characteristics; (2) the types, extent, and patterns of their receipt of standard services and treatments; (3) associations between the receipt of services and treatments, community tenure, and risk of psychiatric hospitalization; and (4) comparisons of average costs for services and treatments in terms of their contributions toward maintaining the clients in the community. The results suggest that substance abuse and other lifestyle factors were related to the dually diagnosed clients' admissions to the CMHC. The dually diagnosed clients' receipt of care was associated strongly with their insurability and global functioning. Medication Services were the most expensive yet most effective service or treatment. Supported Education was the third most expensive and second most effective. Psychosocial Services, the second most expensive, were only effective in terms of maintaining clients in the community. Group Counseling, the fourth most expensive, had no effect on community maintenance and increased the risk of hospitalization when accompanied by Medication Services. Individual Counseling, the least expensive, had no effect on community maintenance, but it reduced the risk of hospitalization when accompanied by Medication Services. Networking/Referral, the fifth most expensive service or treatment, was ineffective. The study compared the results with findings in the literature. Implications are discussed regarding further research, study limitations, practical applications and benefits, and improvements to theoretical understandings, in particular concepts underscoring Managed Care.
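The comparison described above amounts to weighing each service's average cost against its estimated effect on community tenure. The following sketch uses invented costs and effects, not the study's figures, to show the shape of such a cost-effectiveness ratio.

```python
# Cost per additional day of community tenure; None mirrors the study's
# 'ineffective' categories (no community-maintenance effect).
def cost_effectiveness(avg_cost, effect_on_tenure_days):
    if effect_on_tenure_days <= 0:
        return None
    return avg_cost / effect_on_tenure_days

services = {
    # service: (hypothetical average cost in $, hypothetical tenure gain in days)
    "Medication Services":   (1200, 60),
    "Psychosocial Services": (1000, 25),
    "Group Counseling":      (600,  0),   # no community-maintenance effect
}
ratios = {name: cost_effectiveness(c, e) for name, (c, e) in services.items()}
```

A lower ratio means more community tenure per dollar; services with no measurable effect are simply flagged rather than ranked.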
Abstract:
Revisiting the contemporary problems of work in terms of its actors, this paper examines the areas in which the psychology of work can contribute and offer reflection. The problems of diagnosis, as well as of understanding and elucidating the limits and scope of employment policies, imply different designs for processes tied to the psychosocial aspects of work, which go beyond those defined from the standpoint of decent employment.
Abstract:
Stubacher Sonnblickkees (SSK) is located in the Hohe Tauern Range (Eastern Alps) in the south of Salzburg Province (Austria) in the region of Oberpinzgau in the upper Stubach Valley. The glacier is situated at the main Alpine crest and faces east, starting at elevations close to 3050 m and in the 1980s terminated at 2500 m a.s.l. It had an area of 1.7 km² at that time, compared with 1 km² in 2013. The glacier type can be classified as a slope glacier, i.e. the relief is covered by a relatively thin ice sheet and there is no regular glacier tongue. The rough subglacial topography makes for a complex shape in the surface topography, with various concave and convex patterns. The main reason for selecting this glacier for mass balance observations (as early as 1963) was to verify on a complex glacier how the mass balance methods and the conclusions - derived during the more or less pioneer phase of glaciological investigations in the 1950s and 1960s - could be applied to the SSK glacier. The decision was influenced by the fact that close to the SSK there was the Rudolfshütte, a hostel of the Austrian Alpine Club (OeAV), newly constructed in the 1950s to replace the old hut dating from 1874. The new Alpenhotel Rudolfshütte, which was run by the Slupetzky family from 1958 to 1970, was the base station for the long-term observation; the cable car to Rudolfshütte, operated by the Austrian Federal Railways (ÖBB), was a logistic advantage. Another factor for choosing SSK as a glaciological research site was the availability of discharge records of the catchment area from the Austrian Federal Railways who had turned the nearby lake Weißsee ('White Lake') - a former natural lake - into a reservoir for their hydroelectric power plants. 
In terms of regional climatic differences between the Central Alps in Tyrol and the Hohe Tauern, the latter experiences significantly higher precipitation, so one could expect new insights into the different responses of the two glaciers SSK and Hintereisferner (Ötztal Alps), where a mass balance series goes back to 1952. In 1966 another mass balance series, with an additional focus on runoff recordings, was initiated at Vernagtferner, near Hintereisferner, by the Commission of the Bavarian Academy of Sciences in Munich. The usual and necessary link to climate and climate change was given by a weather station newly founded (by Heinz and Werner Slupetzky) at the Rudolfshütte in 1961, which ran until 1967. Along with the extension and enlargement to the so-called Alpine Center Rudolfshütte of the OeAV, a climate observatory (suggested by Heinz Slupetzky) has been operating without interruption since 1980 under the responsibility of ZAMG and the Hydrological Service of Salzburg, providing long-term meteorological observations. The weather station is supported by the Berghotel Rudolfshütte (in 2004 the OeAV sold the hotel to a private owner) with accommodation and facilities. Direct yearly mass balance measurements were started in 1963, at first for three years as part of a thesis project. In 1965 the project was incorporated into the Austrian glacier measurement sites within the International Hydrological Decade (IHD) 1965-1974 and was afterwards extended via the International Hydrological Program (IHP) 1975-1981. During both periods the main financial support came from the Hydrological Survey of Austria. After 1981, funds were provided by the Hydrological Service of the Federal Government of Salzburg. The research was conducted from 1965 onwards by Heinz Slupetzky from the (former) Department of Geography of the University of Salzburg.
These activities received better recognition when the High Alpine Research Station of the University of Salzburg was founded in 1982 and brought in additional funding from the University. With recent changes concerning Rudolfshütte, however, it became unfeasible to keep the research station going. Fortunately, at least the weather station at Rudolfshütte is still operating. In the pioneer years of the mass balance recordings at SSK, the main goal was to understand the influence of the complicated topography on the ablation and accumulation processes. With frequent strong southerly winds (foehn) on the one hand, and precipitation coming in with storms from the north to northwest on the other, snow drift is an important factor on the undulating glacier surface. This results in less snow cover in convex zones and in greater, or maximum, accumulation in concave or flat areas. As a consequence of the accentuated topography, certain characteristic ablation and accumulation patterns can be observed during the summer season every year, and these have been regularly observed for many decades. The process of snow depletion (Ausaperung) runs through a series of stages (described by the AAR) every year. The sequence of stages until the end of the ablation season depends on the weather conditions in a balance year. One needs a strongly negative mass balance year at the beginning of glacier measurements to find out the regularities; 1965, the second year of observation, resulted in a very positive mass balance with very little ablation but heavy accumulation. To date it is the year with the absolute maximum positive balance in the entire mass balance series since 1959, probably since 1950. The highly complex ablation patterns required a high number of ablation stakes at the beginning of the research, and it took several years to develop a clearer idea of the density of measurement points necessary to ensure high accuracy.
A great number of snow pits and probing profiles (and additional measurements at crevasses) were necessary to map the accumulation area and its patterns. Mapping the snow depletion, especially at the end of the ablation season, when the depletion boundary coincides with the equilibrium line, provides one of the main data sets for drawing contour lines of mass balance and for calculating the total mass balance (on a regular-shaped valley glacier there might be an equilibrium line following a contour line of elevation, separating the accumulation area and the ablation area, but not at SSK). An example: in 1969/70, 54 ablation stakes and 22 snow pits were used on the 1.77 km² glacier surface. In the course of the study, the consistency of the accumulation and ablation patterns could be used to reduce the number of measurement points. At the SSK the stratigraphic system, i.e. the natural balance year, is used instead of the usual hydrological year. From 1964 to 1981, the yearly mass balance was calculated by direct measurements. Based on these records of 17 years, a regression analysis between the specific net mass balance and the ratio of ablation area to total area (AAR) has been used since then. The basic requirement was mapping the maximum snow depletion at the end of each balance year. There was the advantage of Heinz Slupetzky's detailed local and long-term experience, which ensured the homogeneity of the series against individual influences on the mass balance calculations. Verifications took place as often as possible by means of independent geodetic methods, i.e. monoplotting, aerial and terrestrial photogrammetry, and more recently also the application of PhotoModeler and laser scans. The semi-direct mass balance determinations used at SSK were tentatively compared with data from periods of mass/volume change, yielding promising first results on the reliability of the method.
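The semi-direct method described above can be sketched as a simple regression: fit specific net balance against the area ratio (AAR) over the directly measured years, then estimate the balance of a later year from its mapped AAR alone. The calibration numbers below are invented for illustration, not SSK data.

```python
# Least-squares fit of balance ~ a + k * AAR over the calibration years.
def fit_linear(aar, balance):
    n = len(aar)
    mx, my = sum(aar) / n, sum(balance) / n
    k = sum((x - mx) * (y - my) for x, y in zip(aar, balance)) \
        / sum((x - mx) ** 2 for x in aar)
    return my - k * mx, k

# Hypothetical calibration series: area ratio (0..1) vs net balance (mm w.e.).
aar_obs = [0.2, 0.4, 0.5, 0.7, 0.9]
balance_obs = [-1400, -700, -300, 400, 1100]
a, k = fit_linear(aar_obs, balance_obs)
estimate = a + k * 0.6   # balance estimated from a mapped AAR of 0.6
```

Once the relation is calibrated, only the end-of-season depletion map is needed each year, which is exactly the labour saving the text describes.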
In recent years, re-analyses of mass balance series have been conducted by the World Glacier Monitoring Service and will be done for SSK too. - The methods developed at SSK also served another objective, much discussed in the 1960s within the community, namely to achieve time- and labour-saving methods to ensure the continuation of long-term mass balance series. The regression relations were used to extrapolate the mass balance series back to 1959; the maximum depletion could be reconstructed by means of photographs for those years. R. Günther (1982) calculated the mass balance series of SSK back to 1950 by analysing the correlation between meteorological data and the mass balance; he found a high statistical relation between measured and determined mass balance figures for SSK. In spite of the complex glacier topography, interesting empirical experience was gained from the mass balance data sets, giving a better understanding of the characteristics of the glacier type, mass balance and mass exchange. It turned out that there are distinct relations between the specific net balance, net accumulation (defined as Bc/S) and net ablation (Ba/S) and the AAR, resulting in characteristic so-called 'turnover curves'. The diagram of SSK represents the type of a glacier without a glacier tongue. Between 1964 and 1966, a basic method was developed, starting from the idea that instead of measuring for years to cover the range between extremely positive and extremely negative yearly balances, one could record the AAR/snow depletion/Ausaperung during one or two summers. The new method was applied on Cathedral Massif Glacier, a cirque glacier with the same area as Stubacher Sonnblickkees, in British Columbia, Canada, during the summers of 1977 and 1978. It returned exactly the expected relations, e.g. mass turnover curves, as found on SSK. The SSK was mapped several times at scales from 1:5000 to 1:10000.
Length variations have been measured since 1960 within the OeAV glacier length measurement programme. Between 1965 and 1981, there was a mass gain of 10 million cubic metres. With a time lag of 10 years, this resulted in an advance until the mid-1980s. Since 1982 there has been a distinct mass loss, amounting to 35 million cubic metres by 2013. In recent years, the glacier has disintegrated faster, forced by the formation of a periglacial lake at the glacier terminus and also by outcrops of rocks (typical for the slope glacier type), which have accelerated the meltdown. The formation of this lake is well documented. The glacier has retreated by some 600 m since 1981. - Since August 2002, a runoff gauge installed by the Hydrographical Service of Salzburg has recorded the discharge of the main part of SSK at the outlet of the new Unterer Eisboden See. The annual reports - submitted from 1982 on as a contractual obligation to the Hydrological Service of Salzburg - document the ongoing processes on the one hand, and on the other emphasize the mass balance of SSK and outline the climatological causes, mainly based on the met data of the Rudolfshütte observatory. There is an additional focus on estimating the annual water balance in the catchment area of the lake. There are certain preconditions for the water balance equation in the area: runoff is recorded by the ÖBB power stations, the mass balance of the now approximately 20% glaciated area (mainly the Sonnblickkees) is measured, and the change in the snow and firn patches and their water content is estimated as well as possible (nowadays laser scanning and ground radar are available to measure the snow pack). There is a net of three precipitation gauges plus the recordings at Rudolfshütte. Evaporation is of minor importance. The long-term annual mean runoff depth in the catchment area is around 3,000 mm/year. The precipitation gauges have measured deficits between 10% and 35%, on average probably 25% to 30%.
That means that the real precipitation in the Weißsee catchment area (at elevations between 2,250 and 3,000 m) is on the order of 3,200 to 3,400 mm a year. The mass balance record of SSK was the first established in the Hohe Tauern region (part of the Hohe Tauern National Park, founded in Salzburg in 1983) and is one of the longest measurement series worldwide. Great efforts are under way to continue the series, to safeguard it against interruption and to guarantee long-term monitoring of the mass balance and volume change of SSK (until the glacier is completely gone, which seems realistic in the near future as a result of ongoing global warming). Heinz Slupetzky, March 2014
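The water-balance reasoning above, including the gauge-deficit correction, can be sketched as follows. The balance is precipitation P = runoff R + evaporation E + storage change dS, all in mm per year; the numbers are illustrative, not measured values from the catchment.

```python
# Correct a gauge reading for its systematic undercatch (the 25-30 %
# deficit mentioned in the text).
def true_precipitation(gauge_mm, deficit_fraction):
    return gauge_mm / (1.0 - deficit_fraction)

# Residual of P - R - E - dS; near zero when the balance closes.
def water_balance_residual(precip_mm, runoff_mm, evap_mm, storage_change_mm):
    return precip_mm - runoff_mm - evap_mm - storage_change_mm

p = true_precipitation(2450, 0.25)            # ~3267 mm corrected precipitation
residual = water_balance_residual(p, 3000, 100, 150)
```

A gauge reading of 2,450 mm corrected for a 25% deficit lands in the 3,200-3,400 mm range the text derives for the catchment.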
Abstract:
This paper assesses the technical efficiency and profitability of the knitwear industry in Bangladesh, taking into account the sector's role in poverty reduction. While stochastic frontier analysis was used to assess technical efficiency, three alternative measures, namely the rate of return, total factor productivity and the Solow residual, were used to gauge the extent and determinants of the profitability of the industry, based on firm-level data collected in 2001. The estimation results indicate the high profitability of the knitwear firms. In Bangladesh, the dynamic development of the industry has entailed great diversity in efficiency in comparison with the garment industries of other developing countries. While there is a significant scale effect in profitability and productivity, no supporting evidence was found for a positive impact on competitiveness from industrial upgrading in terms of the use of expensive machinery, vertical integration, or industrial agglomeration.
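One of the three profitability measures named above, the Solow residual, can be sketched under a Cobb-Douglas technology as ln A = ln Y - alpha ln K - (1 - alpha) ln L. The firm numbers and the capital share below are invented assumptions, not values estimated in the paper.

```python
import math

# Log TFP under constant returns to scale; alpha is an assumed capital
# share, not one estimated from the Bangladeshi firm data.
def solow_residual(output, capital, labor, alpha=0.4):
    return math.log(output) - alpha * math.log(capital) \
           - (1 - alpha) * math.log(labor)

# Two hypothetical knitwear firms with identical inputs: the one with
# higher output gets the larger residual.
r1 = solow_residual(output=500, capital=200, labor=100)
r2 = solow_residual(output=650, capital=200, labor=100)
```

With inputs held fixed, the difference in residuals equals the log output ratio, which is what makes the residual a productivity measure.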
Abstract:
Introduction: Today, many countries, whether developed or developing, are trying to promote decentralization. According to Manor, quoting Nickson's argument, decentralization stems from the necessity to strengthen local governments, as proxies of civil society, to fill the yawning gap between the state and civil society (Manor [1999]: 30). With the end of the Cold War following the collapse of the Soviet Union rendering the cause of the “leadership of the central government to counter communism” meaningless, Manor points out, it has become increasingly difficult to respond flexibly to changes in society under a centralized system. What benefits, then, can be expected from decentralization? Litvack, Ahmad and Bird cite four points: attainment of allocative efficiency in the face of different local preferences for local public goods; improvement of government competitiveness; realization of good governance; and enhancement of the legitimacy and sustainability of heterogeneous national states (Litvack, Ahmad & Bird [1998]: 5). All of these contribute to reducing the economic and social costs of a central government unable to respond to changes in society and to enhancing the efficiency of state administration through the delegation of authority to local governments. Why did Indonesia attempt decentralization? As Maryanov recognizes, the reasons for the implementation of decentralization in Indonesia have never been explicitly presented (Maryanov [1958]: 17). But there was strong momentum toward building a democratic state in Indonesia at the time of independence, and as indicated by the provisions of Article 18 of the 1945 Constitution, there was a tendency in Indonesia from the beginning to debate decentralization in association with democratization.
That said, the debate about democratization was fairly abstract, and its main points were to ease tensions, quiet complaints, satisfy the political forces and thus stabilize the process of government (Maryanov [1958]: 26-27). What triggered decentralization in Indonesia in earnest, of course, was the collapse of the Soeharto regime in May 1998. The Soeharto regime, regarded as the epitome of the centralization of power, became incapable of effectively dealing with problems in the administration of the state and development administration. Besides, the post-Soeharto era of “reform (reformasi)” demanded the complete wipeout of the Soeharto image, and decentralization stood in contraposition to the centralization of power. The Soeharto regime that ruled Indonesia for 32 years was established in 1966 under the banner of “anti-communism.” The end of the Cold War structure in the late 1980s undermined the centralization of power to counter communism as the legitimizing rationale claimed by the Soeharto regime; the factor for decentralization cited by Manor is applicable here. Decentralization can be interpreted to mean not only the reversal of a centralized system of government due to its inability to respond to changes in society, as Manor points out, but also the participation of local governments in the process of nation-state building, through a more positive transfer of power (democratic decentralization) and in a coordinated pursuit with the central government of a new shape of the state. However, it is also true that a variety of problems have emerged in the process of implementing decentralization in Indonesia. This paper discusses the relationship between decentralization and the formation of the nation state with an awareness of the problems and issues described above. Section 1 retraces the history of decentralization by examining laws and regulations for local administration and how they were, or were not, actually implemented.
Section 2 focuses on the relationships among the central government, local governments, foreign companies and other actors in the play over the distribution of profits from the exploitation of natural resources, and examines how the ulterior motives of these actors and the amplification of mistrust spawned intense conflicts that, in extreme cases, grew into separatist and independence movements. Section 3 considers the merits and demerits at this stage of the decentralization implemented since 2001 and sheds light on the significance of decentralization in terms of nation-state building. Finally, Section 4 attempts to review decentralization as an “opportunity to learn by doing” for the central and local governments in the process of nation-state building. In the context of decentralization in Indonesia, deconcentration (dekonsentrasi), decentralization (desentralisasi) and support assignments (tugas pembantuan; medebewind, a Dutch word, was used previously) are defined as follows. Dekonsentrasi means that the central government puts a local office of its own, or an outpost agency, in charge of implementing a service without delegating the administrative authority over that particular service. The outpost agency carries out the service as instructed by the central government. A head of a local government, when acting for the central government, gets involved in the process of dekonsentrasi. Desentralisasi, meanwhile, occurs when the central government cedes the administrative authority over a particular service to local governments. Under desentralisasi, local governments can undertake the particular service at their own discretion, and the central government, after the delegation of authority, cannot interfere with how local governments handle that service. Tugas pembantuan occur when the central government makes local governments or villages, or local governments make villages, undertake a particular service.
In this case, the central government, or the local governments, provides the necessary funding, equipment and materials, and officials of local governments and villages undertake the service under the supervision and guidance of the central or local governments. Tugas pembantuan are maintained until the local governments and villages become capable of undertaking that particular service on their own.
Abstract:
The relationship between urban structure and mobility has been studied for more than 70 years. The urban environment comprises multiple dimensions, such as urban structure, land uses, and the distribution of diverse facilities (shops, schools, restaurants, parking, etc.). A review of the existing literature in this context reveals a variety of analyses, methodologies, geographical scales and dimensions, of both mobility and urban structure. It is thus a much-studied yet highly complex relationship, on which there is as yet no consensus about which dimension of the urban environment influences which dimension of mobility, or about the appropriate way to represent this relationship. To answer these research questions, this thesis has the following general objectives: (1) to contribute to a better understanding of the complex relationship between urban structure and mobility, and (2) to understand the role of latent attributes in the relationship between the urban environment and mobility. The specific objective of the thesis is to analyse the influence of the urban environment on two dimensions of mobility: number of trips and type of tour. Given the complexity of the relationship between the urban environment and mobility, the thesis aims to contribute to its better understanding by using three geographical scales for the variables and by analysing the influence of unobserved effects on mobility. The analysis uses a database made up of three types of data: (1) a mobility survey conducted during 2006 and 2007, yielding a total of 943 responses in three neighbourhoods of Madrid: Chamberí, Pozuelo and Algete; (2) municipal information from the National Institute of Statistics (INE), linked to the origins and destinations of the trips recorded in the survey;
and (3) information georeferenced in Arc-GIS for the households participating in the survey, containing data on street structure and the location of schools, parking, medical centres and restaurants. Inter- and intra-group correlations were analysed, and four cases of attributes were modelled under an ordered logit structure. Self-selection was then evaluated through the joint estimation of the choices of neighbourhood type and number of trips. The neighbourhood-type choice comprises three alternatives (CBD, Urban and Suburban, according to the residence zone recorded in the surveys), while the number-of-trips choice comprises four ordinal categories: 0 trips, 1-2 trips, 3-4 trips, and 5 or more trips. Starting from the best specification of the ordered logit model, a joint mixed-ordinal model was developed. The results indicate that the exogenous variables require an exhaustive correlation analysis in order to avoid biased results. It was found that measuring the attributes of the built environment (BE) where the trip takes place is important, but municipal information is also highly explanatory of individual mobility; the perception of destination zones at the municipal level is therefore considered important. In the context of self-selection, it is important to model the decisions jointly. Self-selection does exist, since the jointly estimated parameters are significant; however, only certain attributes of the urban environment are equally important for the choice of residential zone and for trip frequency. To analyse the propensity to travel, a hybrid model was developed, consisting of a latent variable, an indicator and a discrete choice model. The latent variable is called "Propensity to Travel", whose indicator in the measurement equation is the number of trips; the discrete choice is the type of tour.
The choice model comprises five alternatives, according to the hierarchy of activities established in the thesis: HOME, no trips made on the study day; HWH, a tour whose main activity is work or study, with no intermediate stops; HWHs, the same tour when the individual makes intermediate stops; HOH, a tour whose main activity is other than work or study, with no intermediate stops; and HOHs, the same tour with intermediate stops. To reach the best model specification, considerable work was carried out considering different model structures and three types of estimation, yielding consistent and efficient parameters. The results show that modelling tours offers an advantage over modelling trips, since it overcomes the limitations of space and time by linking the trips made by the same person on the study day. The propensity to travel (PT) exists and is specific to each type of tour; the parameters estimated in the hybrid model were significant and different for each tour-type alternative. Finally, the thesis verifies that hybrid models represent an improvement over traditional discrete choice models, producing consistent and more robust parameters. Regarding transport policies, it is shown that urban environment attributes are more important than level-of-service (LOS) variables in the generation of multi-stage tours. This thesis represents the first empirical analysis of the relationship between tour types and the propensity to travel; the concept of Propensity to Travel was developed specifically for this thesis. Likewise, the development of a joint residential choice (RC)-number of trips model based on three measurement scales is an innovation in the comparison of geographical scales, which had not previously been done in self-selection modelling.
The relationship between the built environment (BE) and travel behaviour (TB) has been studied in a number of cases, using several methods (aggregate and disaggregate approaches) and with different focuses: trip frequency, automobile use, vehicle miles travelled, and so on. Travel is ultimately generated by the need to undertake activities and obtain services, and there is a general consensus that urban components affect TB. However, further research is still needed to better understand which components of travel behaviour are most affected, and by which urban components. To fill this gap, the present dissertation addresses two main objectives: (1) to contribute to a better understanding of the relationship between travel demand and the urban environment, and (2) to develop an econometric model for estimating travel demand with urban environment attributes. To this end, the thesis carries out an exhaustive study and computation of land-use variables in order to find the best representation of the BE for modelling trip frequency. In particular, two empirical analyses are carried out: 1. Estimation of three dimensions of travel demand using dimensions of the urban environment, comparing different travel dimensions and geographical scales, and measuring the contribution of self-selection through joint models. 2. Development of a hybrid model integrating a latent variable and a discrete choice model. The implementation of hybrid models is new in the analysis of land use and travel behaviour: BE and TB interact explicitly, allowing richer information about a specific individual decision process. All empirical analyses use a database from a survey conducted in 2006 and 2007 in Madrid. Spatial attributes describing the neighbourhood environment are derived from different data sources: the National Institute of Statistics (INE; administrative units: municipality and district) and GIS (circular units).
INE provides raw data for administrative spatial units: municipality and district. The construction of census units is trivial, as the census bureau provides tables that readily define districts and municipalities; the construction of circular units requires determining the radius and associating the spatial information with our households. The first empirical part analyses trip frequency by applying an ordered logit model, studying the effect of socio-economic, transport and land-use characteristics on two travel dimensions: trip frequency and type of tour. In particular, land use is defined in terms of type of neighbourhood and types of dwellers. Three neighbourhood representations are explored, and three procedures for constructing neighbourhood attributes are described; in particular, administrative units (municipality and district) and a circular-unit representation are examined as ways of representing the neighbourhood. While ordered logit models are well known, intensive work was required to construct the spatial attributes. The second empirical analysis consists of the development of an innovative econometric model that considers a latent variable called "propensity to travel", with the choice model concerning the type of tour. The first two specifications of the ordinal models help to estimate this latent variable. The latent variable is unobserved, but its manifestations, called "indicators", are observed; the probability of choosing a tour alternative is then conditional on the latent variable. Since the latent variable is unknown, the model integrates over its distribution. Four "sets of best variables" are specified, following the specification obtained from the correlation analysis. The results show that the relative importance of SE variables versus BE variables depends on how the BE variables are measured. We found that each of the three spatial scales has its own qualities and drawbacks.
Spatial scales play an important role in predicting travel demand, due to the variability of measures at trip origins/destinations within the same administrative unit (municipality, district and so on). Larger units produce less variation in the data, but this does not affect certain variables, such as public transport supply, which are more significant at the municipality level; by contrast, land-use measures are more efficient at the district level. Self-selection is weak in this context, so the influence of BE attributes is genuine. The results of the hybrid model show that unobserved factors affect the choice of tour complexity. The latent variable used in this model is the propensity to travel, which is explained by socioeconomic aspects and neighbourhood attributes. The results show that neighbourhood attributes indeed have a significant impact on the choice of the type of tour, both directly and through the propensity to travel. The propensity to travel has a different impact depending on the structure of each tour, and increases the probability of choosing more complex tours, such as tours with many intermediate stops. The integration of the choice and latent variable models shows that omitting important perceptions and attitudes leads to inconsistent estimates. The results also indicate that goodness of fit improves by adding the latent variable, in both sequential and simultaneous estimation, and that there are significant differences in the sensitivity to the latent variable across alternatives. In general, as expected, the hybrid models show a major improvement in goodness of fit compared to a classical discrete choice model that does not incorporate latent effects, and the integrated model allows a more detailed analysis of the behavioural process. In summary, the effect of built environment characteristics on trip frequency is analysed in depth.
In particular, we tried to better understand how land-use characteristics can be defined and measured, and which of these measures really have an impact on trip frequency. We also tested the advantages of hybrid choice models (HCM) in this field. We can conclude that HCM show a major improvement in goodness of fit compared to classical discrete choice models that do not incorporate latent effects; consequently, the application of HCM shows the importance of the latent variable in the decision on tour complexity. People are more elastic to built environment attributes than to level of service. Policy implications therefore point towards developing more mixed-use areas, combining workplaces with commercial retail.
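The ordered logit structure used in the first empirical analysis can be sketched as follows. The coefficients, cut-points and variables below are purely illustrative placeholders, not the thesis estimates:

```python
import math

def logistic(z):
    # Logistic CDF assumed by the ordered (ordinal) logit model
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(utility, thresholds):
    """Category probabilities for an ordered logit model.

    utility    -- x'beta, the systematic utility for one individual
    thresholds -- increasing cut-points tau_1 < ... < tau_{K-1}
    Returns K probabilities, one per ordered category:
    P(y = k) = F(tau_k - x'beta) - F(tau_{k-1} - x'beta).
    """
    cdf = [logistic(t - utility) for t in thresholds]
    probs = [cdf[0]]
    probs += [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
    probs.append(1.0 - cdf[-1])
    return probs

# Hypothetical coefficients and attributes for one individual;
# trip-frequency categories: 0, 1-2, 3-4, 5+ trips.
beta = {"workers": 0.8, "cars": 0.3, "density": 0.05}
x = {"workers": 1, "cars": 1, "density": 4.0}
u = sum(beta[k] * x[k] for k in beta)
p = ordered_logit_probs(u, thresholds=[0.5, 1.5, 2.5])
assert abs(sum(p) - 1.0) < 1e-9  # probabilities sum to one
```

The hybrid model then treats the utility as partly driven by an unobserved "propensity to travel", integrating the choice probability over that latent variable's distribution.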
Resumo:
Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning, and propose a light inference system (LIS) aiming at simplifying embedded inference processes by offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications: it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test LIS features in a real application scenario, an 'Activity Monitor' has been designed and implemented: a personal health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide on the convenience of triggering notifications, and to determine the best interface or channel to deliver these context-aware alerts.
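Rule-based context reasoning of the kind LIS encapsulates can be illustrated with a minimal forward-chaining sketch. The facts and rules below are hypothetical; LIS itself wraps existing ontology and rule tools on the device rather than implementing the engine this way:

```python
def forward_chain(facts, rules):
    """Apply rules of the form (antecedents, consequent) until a fixpoint.

    facts -- iterable of atomic context facts (strings)
    rules -- list of (set_of_antecedents, consequent) pairs
    Returns the set of all derivable facts.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire a rule when all its antecedents hold and it adds a new fact
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# Hypothetical rules in the spirit of the Activity Monitor scenario
rules = [
    ({"steps_low", "sedentary_hours_high"}, "activity_level_low"),
    ({"activity_level_low", "phone_in_use"}, "send_notification"),
]
context = forward_chain({"steps_low", "sedentary_hours_high", "phone_in_use"}, rules)
assert "send_notification" in context
```

Chaining the two rules derives the low activity level first and then the notification decision, mirroring how the framework turns sensor facts into delivery decisions.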
Resumo:
Virtualized infrastructures are a promising way of providing flexible and dynamic computing solutions for resource-consuming tasks. Scientific workflows are one such kind of task, as they need a large amount of computational resources during certain periods of time. To provide the best infrastructure configuration for a workflow, it is necessary to explore as many providers as possible, taking into account different criteria such as Quality of Service, pricing, response time, network latency, etc. Moreover, each of these new resources must be tuned to provide the tools and dependencies required by each step of the workflow. Working with different infrastructure providers, either public or private, each using its own concepts and terms, and with a set of heterogeneous applications, requires a framework for integrating all the information about these elements. This work proposes semantic technologies for describing and integrating all the information about the different components of the overall system, together with a set of policies created by the user. Based on this information, a scheduling process is performed to generate an infrastructure configuration defining the set of virtual machines that must be run and the tools that must be deployed on them.
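The policy-driven selection step described above can be sketched as a simple filter-and-rank over provider descriptions. The provider offers and policy fields here are hypothetical placeholders for the semantically described information, not part of the proposed system:

```python
# Hypothetical provider offers, as they might be extracted from
# semantic descriptions of public and private infrastructures.
providers = [
    {"name": "public-a",  "price": 0.12, "latency_ms": 40, "qos": 0.99},
    {"name": "private-b", "price": 0.08, "latency_ms": 15, "qos": 0.95},
    {"name": "public-c",  "price": 0.20, "latency_ms": 10, "qos": 0.999},
]
# User-created policies constraining acceptable configurations
policies = {"max_price": 0.15, "min_qos": 0.98}

# Keep only offers that satisfy every policy constraint
feasible = [p for p in providers
            if p["price"] <= policies["max_price"]
            and p["qos"] >= policies["min_qos"]]

# Rank the remaining candidates: cheapest first, latency as tie-break
best = min(feasible, key=lambda p: (p["price"], p["latency_ms"]))
assert best["name"] == "public-a"
```

A real scheduler would additionally map workflow steps to the selected machines and attach the tool deployments each step requires.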
Resumo:
Service compositions put together loosely-coupled component services to perform more complex, higher level, or cross-organizational tasks in a platform-independent manner. Quality-of-Service (QoS) properties, such as execution time, availability, or cost, are critical for their usability, and permissible boundaries for their values are defined in Service Level Agreements (SLAs). We propose a method whereby constraints that model SLA conformance and violation are derived at any given point of the execution of a service composition. These constraints are generated using the structure of the composition and properties of the component services, which can be either known or empirically measured. Violation of these constraints means that the corresponding scenario is unfeasible, while satisfaction gives values for the constrained variables (start / end times for activities, or number of loop iterations) which make the scenario possible. These results can be used to perform optimized service matching or trigger preventive adaptation or healing.
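The conformance/violation check described above can be sketched for the simplest case: a sequential composition with known best- and worst-case execution times per remaining component service. The numbers are illustrative, and the real method derives richer constraints from the composition structure:

```python
def remaining_time_bounds(remaining_services):
    """Sum best/worst-case execution times of the services still to run
    in a purely sequential composition."""
    lo = sum(s["min"] for s in remaining_services)
    hi = sum(s["max"] for s in remaining_services)
    return lo, hi

def check_sla(elapsed, remaining_services, deadline):
    """Classify the current execution point against an execution-time SLA."""
    lo, hi = remaining_time_bounds(remaining_services)
    if elapsed + lo > deadline:
        return "violation"      # even the best case misses the SLA bound
    if elapsed + hi <= deadline:
        return "conformance"    # even the worst case stays within the bound
    return "undecided"          # outcome depends on actual durations

# Two services left, with measured [min, max] execution times
remaining = [{"min": 2, "max": 5}, {"min": 1, "max": 4}]
assert check_sla(elapsed=3,  remaining_services=remaining, deadline=15) == "conformance"
assert check_sla(elapsed=13, remaining_services=remaining, deadline=15) == "violation"
```

The "violation" branch corresponds to an unfeasible scenario, which is exactly the signal that can trigger preventive adaptation or healing before the SLA is actually breached.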