817 results for Proxy servers
Abstract:
Object-oriented programming is a widely adopted paradigm for desktop software development. This paradigm partitions software into separate entities, objects, which consist of data and related procedures used to modify and inspect it. The paradigm has evolved during the last few decades to emphasize decoupling between object implementations, via means such as explicit interface inheritance and event-based implicit invocation. Inter-process communication (IPC) technologies allow applications to interact with each other. This makes it possible to distribute software across multiple processes, resulting in a modular architecture with benefits in resource sharing, robustness, code reuse and security. The support for object-oriented programming concepts varies between IPC systems. This thesis focuses on the D-Bus system, which has recently gained many users but is still only scantily researched. D-Bus supports asynchronous remote procedure calls with return values and a content-based publish/subscribe event delivery mechanism. In this thesis, several patterns for method invocation in D-Bus and similar systems are compared. The patterns that simulate synchronous local calls are shown to be dangerous. Later, we present a state-caching proxy construct, which avoids the complexity of properly asynchronous calls for object inspection. The proxy and certain supplementary constructs are presented conceptually as generic object-oriented design patterns. The effect of these patterns on non-functional qualities of software, such as complexity, performance and power consumption, is reasoned about based on the properties of the D-Bus system. The use of the patterns reduces complexity while maintaining the other qualities at a good level. Finally, we present currently existing means of specifying D-Bus object interfaces for the purposes of code and documentation generation. The interface description language used by the Telepathy modular IM/VoIP framework is found to be a useful extension of the basic D-Bus introspection format.
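A minimal sketch of the state-caching proxy idea in Python (the class, its parameters and the signal mechanism are our illustration, not the thesis's code): property reads are served from a local cache that a change signal keeps current, so clients avoid both blocking synchronous round trips and the complexity of fully asynchronous inspection.

```python
# Sketch of a state-caching proxy: reads are served from a local cache
# that is kept up to date by change signals from the remote object.
class CachingProxy:
    def __init__(self, remote, subscribe):
        self._cache = {}
        self._remote = remote                   # asynchronous call interface
        subscribe(self._on_properties_changed)  # e.g. a PropertiesChanged-style signal

    def _on_properties_changed(self, changed):
        self._cache.update(changed)             # push-based cache refresh

    def get(self, name, on_ready):
        if name in self._cache:
            on_ready(self._cache[name])         # served locally, no round trip
        else:
            # First access: issue an asynchronous Get and cache the reply.
            def done(value):
                self._cache[name] = value
                on_ready(value)
            self._remote.get_async(name, done)
```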
Abstract:
This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as a structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, the stakeholders, the intrinsic process, and the life-cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as helping to improve communication between the different capability communities.
Abstract:
This paper reviews existing military capability models and the capability life cycle. It proposes a holistic capability life-cycle model (HCLCM) that combines capability systems with related capability models. The ISO 15288 standard is used as a framework to construct the HCLCM. The HCLCM also shows how capability models and systems relate to each other throughout the capability life cycle. The main contribution of this paper is conceptual in nature. The model complements the existing, but still evolving, understanding of the military capability life cycle in a holistic and systemic way. It also increases understanding and facilitates communication among various military capability stakeholders.
Abstract:
Concept development and experimentation (CD&E) plays an important role in driving strategic transformation in the military community. Defence architecture frameworks, such as the NATO Architecture Framework, are considered excellent means to support CD&E. There is not much empirical evidence, however, to indicate how enterprise architectures (EA) are applied in the military community, particularly in military CD&E. Consequently, this paper describes and discusses an empirical application of the EA approach in CD&E. The research method is a case study. Situational method engineering (SiME) is used as a framework to adapt the EA approach to the case project of the paper. The findings suggest that EA is applicable to CD&E work, although not all aspects of the original concept could be expressed in the EA model of the case project. The results also show that the SiME method can support applying the EA framework to CD&E in the case project.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of missingness mechanism and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
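For reference, the core of the IPCW idea in our own notation (the study's exact specification may differ): a subject i still at risk at time t is weighted by the inverse of its estimated probability of having remained uncensored given covariates x_i, which corrects the Kaplan-Meier estimator for censoring (here, attrition) that depends on those covariates.

```latex
w_i(t) = \frac{1}{\hat{K}(t \mid x_i)}, \qquad
\hat{K}(t \mid x_i) = \hat{P}(C_i > t \mid x_i),

\hat{S}_{\mathrm{IPCW}}(t) = \prod_{t_j \le t}
  \left( 1 - \frac{\sum_i w_i(t_j)\, d_i(t_j)}
                  {\sum_i w_i(t_j)\, Y_i(t_j)} \right)
```

Here d_i(t_j) indicates an observed event for subject i at t_j and Y_i(t_j) indicates membership of the risk set; with uninformative censoring all weights equal one and the ordinary Kaplan-Meier estimator is recovered.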
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
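The reactive side of such provisioning can be sketched in a few lines of Python (thresholds and capacities here are illustrative placeholders, not ARVUE's tuned policy): scale the VM pool on observed utilization with hysteresis, and let admission control reject work that would overload what is currently provisioned.

```python
# Reactive VM provisioning with hysteresis: scale up near saturation,
# scale down (consolidate) on sustained low load. Values are illustrative.
def plan_capacity(active_sessions, vms, per_vm_capacity=100,
                  scale_up_at=0.8, scale_down_at=0.4):
    utilization = active_sessions / (vms * per_vm_capacity)
    if utilization > scale_up_at:
        vms += 1                       # over threshold: provision another VM
    elif utilization < scale_down_at and vms > 1:
        vms -= 1                       # sustained low load: release a VM
    return vms

def admit(active_sessions, vms, per_vm_capacity=100):
    # Admission control: reject sessions that would overload provisioned VMs.
    return active_sessions < vms * per_vm_capacity
```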
Abstract:
We have developed software called pp-Blast that uses the publicly available Blast package and PVM (Parallel Virtual Machine) to partition a multi-sequence query across a set of nodes with replicated or shared databases. Benchmark tests show that pp-Blast running on a cluster of 14 PCs outperformed conventional Blast running on large servers. In addition, using pp-Blast and the cluster we were able to map all human cDNAs onto the draft of the human genome in less than 6 days. We propose here that the cost/benefit ratio of pp-Blast makes it appropriate for large-scale sequence analysis. The source code and configuration files for pp-Blast are available at http://www.ludwig.org.br/biocomp/tools/pp-blast.
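The parallelization idea is simple to sketch (our Python illustration of the partitioning step; pp-Blast itself is built on PVM): split the multi-sequence query into per-node chunks, run an independent Blast search on each node against the replicated database, and concatenate the reports afterwards.

```python
# Round-robin partition of a multi-sequence query across worker nodes;
# each chunk is then searched independently (one Blast process per node)
# and the outputs are concatenated in query order afterwards.
def split_query(records, n_nodes):
    chunks = [[] for _ in range(n_nodes)]
    for i, rec in enumerate(records):    # records: parsed FASTA entries
        chunks[i % n_nodes].append(rec)  # round-robin keeps chunks balanced
    return chunks
```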
Abstract:
The aim of this study was to examine community and individual approaches in responses to mass violence after the school shooting incidents in Jokela (November 2007) and Kauhajoki (September 2008), Finland. From the community perspective, responses to any shocking criminal event may have integrative, as well as disintegrative, effects within the neighborhood. The integration perspective argues that a heinous criminal event within one's community is an offence to collectively held feelings and beliefs and increases perceived solidarity, whereas the disintegration perspective suggests that a criminal event weakens the social fabric of community life by increasing fear of crime and mistrust among locals. From the individual perspective, socio-demographic factors, such as gender, are typically significant indicators explaining variation in fear of crime. Beyond this, people are not equally exposed to violent crime, and therefore prior victimization and event-related experiences may further explain why people differ in their sensitivity to the risk of mass violence. Finally, factors related to subjective mental health, such as depressed mood, are also likely to moderate individual differences in responses to mass violence. This study is based on the correlational design of four independent cross-sectional postal surveys. The sampling frames (N=700) for the surveys were the Finnish-speaking adult population aged 18–74 years. The first mail survey in Jokela (n=330) was conducted between May and June 2008, approximately six months after the shooting incident at the local high school. The second Jokela survey (n=278) was conducted in May–June 2009, 18 months after the incident. The first survey in Kauhajoki (n=319) was collected six months after the incident at the local University of Applied Sciences, in March–April 2009, and the second (n=339) in March–April 2010, approximately 18 months after the event. Linear and ordinal regression and path analysis are used as methods of analysis. The school shootings in Jokela and Kauhajoki were extremely disturbing events, which deeply affected the communities involved. However, based on the results, community responses to mass violence differed between the two localities. An increase in social solidarity appears to apply in the case of the Jokela community, but not in the case of the Kauhajoki community. Thus a criminal event does not necessarily impact the wider community; each empirical finding is most likely related to different contextual and event-specific factors. Beyond this, community responses to mass violence in Jokela also indicated that the incident was related to a more general sense of insecurity and was associated with perceived community deterioration, which further suggests that responses to mass violence may have both integrating and disintegrating effects. Moreover, community responses to mass violence should also be examined in relation to broader social anxieties and as a proxy for generalized insecurity. Community response is an emotive process, and incident-related feelings are perhaps projected onto other identifiable concerns. However, this may open the door for social errors and, despite integrative effects, may also have negative consequences within the neighborhood. The individual approach suggests that women are more fearful than men when a threat refers to violent crime. Young women (aged 18–34) were the age and gender group most worried about the threat of mass violence at schools, whereas young men (aged 18–34) were the least worried group, even compared to older men. It was also found that concerns about mass violence were stronger among respondents with the lowest level of monthly household income than among financially better-off respondents. Perhaps more importantly, responses to mass violence were affected by emotional proximity to the event: worry about the recurrence of school shootings was stronger among respondents who either were a parent of a school-aged child or knew a victim. Finally, the results indicate that psychological wellbeing is an important individual-level factor. Respondents who reported depressed mood consistently expressed concerns about mass violence and community deterioration. Systematic assessments of the impact of school shooting events on communities are therefore needed. This requires the consolidation of community and individual approaches. Comparative study designs would further benefit from international collaboration across disciplines. Extreme school violence has become a national concern, and a deeper understanding of crime-related anxieties in contemporary Finland also requires community-based surveys.
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region where people have limited access to regular healthcare services. The same system can be beneficial in urban areas where hospitals can be overcrowded and where it may take substantial time to receive healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes IReHMo, an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. Results showed that CoAP is the most efficient protocol for transmitting small-sized healthcare data to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at large scale in regions of northern Sweden.
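As an illustration of why CoAP suits small, frequent readings (UDP transport and a compact binary header instead of HTTP's verbose text headers), a minimal client sketch using the aiocoap Python library; the gateway URI and resource path are hypothetical.

```python
import asyncio
from aiocoap import Context, Message, PUT

async def push_reading(payload: bytes):
    # One small PUT per sensor reading; endpoint name is hypothetical.
    ctx = await Context.create_client_context()
    req = Message(code=PUT, payload=payload,
                  uri="coap://health-gw.example/sensors/heart-rate")
    resp = await ctx.request(req).response
    return resp.code

# Example use: asyncio.run(push_reading(b'{"hr": 72}'))
```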
Abstract:
Many-core systems offer great potential for application performance through their massively parallel structure. Such systems are currently being integrated into most parts of daily life, from high-end server farms to desktop systems, laptops and mobile devices. Yet these systems face increasing challenges, such as high temperature causing physical damage, high electricity bills both for servers and individual users, unpleasant noise levels due to active cooling, and unrealistic battery drainage in mobile devices; factors caused directly by poor energy efficiency. Power management has traditionally been an area of research providing hardware solutions or runtime power management in the operating system in the form of frequency governors. Energy awareness in application software is currently non-existent. This means that applications are not involved in the power management decisions, nor does any interface exist between the applications and the runtime system to provide such facilities. Power management in the operating system is therefore performed purely based on indirect implications of software execution, usually referred to as the workload. This often results in over-allocation of resources, and hence wasted power. This thesis discusses power management strategies in many-core systems in the form of increasing application software awareness of energy efficiency. The presented approach allows meta-data descriptions in the applications and is manifested in two design recommendations, 1) energy-aware mapping and 2) energy-aware execution, which allow the applications to directly influence the power management decisions. The recommendations eliminate over-allocation of resources and increase the energy efficiency of the computing system. Both recommendations are fully supported by a provided interface in combination with a novel power management runtime system called Bricktop. The work presented in this thesis allows both new and legacy software to execute with the most energy-efficient mapping on a many-core CPU and at the most energy-efficient performance level. A set of case study examples demonstrates real-world energy savings in a wide range of applications without performance degradation.
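The flavor of the two recommendations can be illustrated as follows (a hypothetical interface of our own devising, not Bricktop's actual API): the application publishes meta-data about a computation phase, and the runtime maps and clocks cores from that description instead of guessing from the observed workload.

```python
# Hypothetical illustration of energy-aware mapping and execution: all
# names here are ours, not Bricktop's. The application states what a
# phase needs; the runtime allocates and clocks cores accordingly.
from dataclasses import dataclass

@dataclass
class EnergyHints:
    parallelism: int        # how many cores the phase can actually use
    latency_critical: bool  # may the runtime lower the performance level?

def run_phase(task, hints: EnergyHints, runtime):
    # Energy-aware mapping: reserve only the cores the phase can use...
    cores = runtime.reserve_cores(hints.parallelism)
    # ...and energy-aware execution: pick the lowest performance level
    # that still meets the phase's requirement, avoiding over-allocation.
    runtime.set_performance(cores, high=hints.latency_critical)
    return task(cores)
```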
Abstract:
This paper investigates the role that economic variables play in the determination of happiness, using reported happiness as a proxy for individual well-being. We use microdata extracted from the World Values Survey for five countries, with emphasis on the Brazilian case. Our findings suggest that there is a positive and significant correlation between happiness and income. Unemployment is also a large source of unhappiness. In most cases, happiness appears to be positively correlated with being married. Moreover, happiness is apparently U-shaped in age, reaching its minimum around age 50.
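In our notation (the paper's exact specification may differ), the reported pattern corresponds to a happiness equation of the form

```latex
H_i = \beta_0 + \beta_1 \ln y_i + \beta_2 U_i + \beta_3 M_i
      + \beta_4 a_i + \beta_5 a_i^2 + \varepsilon_i ,
```

with income y_i entering positively (beta_1 > 0), unemployment U_i negatively (beta_2 < 0), marriage M_i positively (beta_3 > 0), and beta_4 < 0 < beta_5, which produces the U-shape in age a_i with its minimum at a* = -beta_4 / (2 beta_5), here around 50.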
Abstract:
Infrastructure and productivity in Brazil. This article analyses the relationship between infrastructure and total factor productivity (TFP) in Brazil during the second half of the twentieth century. Public capital is used as a proxy for infrastructure capital. The hypothesis to be tested is that an increase in infrastructure, more so than a rise in the private capital stock, has a positive effect on productivity in the long run. To that end, the Johansen methodology was used to test for cointegration between TFP and the public/private capital ratio. Indeed, this complementary (public-private) relation was found to help explain TFP's path from 1950 to 2000. The results were robust to different measures of productivity and of the public/private ratio. In addition, the short- (medium-) run analysis indicated that shocks to this ratio have a significant effect on TFP, but the opposite is not true. Therefore, the cuts in infrastructure investment are a possible explanation for the fall in TFP during the 1970s and 1980s.
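In our notation (not the article's own equations), the Johansen procedure estimates a vector error-correction model over the two study variables,

```latex
\Delta z_t = \Pi z_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta z_{t-i}
             + \varepsilon_t ,
\qquad
z_t = \begin{pmatrix} \mathrm{TFP}_t \\ (K^{pub}/K^{priv})_t \end{pmatrix},
\qquad \Pi = \alpha \beta' ,
```

where the rank of Pi gives the number of cointegrating relations; finding rank one supports a long-run equilibrium between TFP and the public/private capital ratio, with beta the long-run coefficients and alpha the adjustment speeds.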
Abstract:
The aim of this Master's thesis was to determine whether Stora Enso Metsä would benefit from moving its information systems from traditional data-center services to cloud services. Stora Enso Metsä runs many different planning-related batch jobs. Some are run only a few times a year, such as the mills' wood demand calculations; others a few times a month, such as transport model runs; or a few times a week, such as harvesting planning. In those cases the servers can be started separately and used only when they are actually needed. The work concluded that adopting cloud services brings cost savings and adds flexibility to service management. Implemented as self-service, the servers can be managed flexibly to save costs. Cloud services can also speed up project throughput and allow downtime to be scheduled more precisely, since the supplier's work may not be needed at all. In the end, it is very difficult for the customer to know how much of the costs has been allocated in different ways between the different services.
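The on-demand pattern the thesis describes can be sketched generically (all names here are illustrative; no specific cloud SDK is implied): the server exists, and incurs cost, only for the duration of the batch run.

```python
# Start a server only for the batch run and stop it afterwards, so the
# job pays for hours used rather than 24/7 uptime. 'cloud' stands for
# any IaaS client wrapper; the names are illustrative.
def run_batch(cloud, server_id, job):
    cloud.start(server_id)
    try:
        job()                    # e.g. a harvest-planning model run
    finally:
        cloud.stop(server_id)    # stop even on failure to avoid idle cost
```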
Abstract:
Consolidated democracy and the size of the State. Common sense suggests that more consolidated democracies and advanced economies tend to be more efficient and to produce smaller States. What is observed in practice, however, is a positive correlation between "democratic consolidation" and "tax burden" (as a proxy for "size of government"). This finding, while not expressing any causal relationship between the two variables, is evidence that a more republican and democratic State, as defined by Bresser-Pereira, must be able to provide broader public services with better quality, effectively and efficiently. That is, in consolidated democracies, the State should not be small.