555 results for leverage
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and the data processing needed for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and produces lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
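The leverage-based extrapolation guard described above can be illustrated with a minimal Python sketch; the quantile rule and all names here are illustrative assumptions, not the authors' implementation. Statistical leverage h(x) = x (X'X)^(-1) x' measures how far a candidate operating point lies from the training design, so the search can reject candidates whose leverage distribution exceeds that of the starting solution.

    import numpy as np

    def leverage(X_train, X_query):
        # h(x) = x (X'X)^(-1) x': distance of query points from the bulk
        # of the training data; large values signal extrapolation.
        XtX_inv = np.linalg.pinv(X_train.T @ X_train)
        return np.einsum('ij,jk,ik->i', X_query, XtX_inv, X_query)

    def within_leverage_budget(X_train, X_start, X_candidate, q=0.99):
        # Constrain the candidate's leverage distribution relative to the
        # starting solution's (hypothetical quantile-based rule).
        h_start = leverage(X_train, X_start)
        h_cand = leverage(X_train, X_candidate)
        return np.quantile(h_cand, q) <= np.quantile(h_start, q)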
Abstract:
This study examines the effect of democratization on a key education reform across three Mexican states. Previous scholarship has found a positive effect of electoral competition on social spending, as leaders seek to improve their reelection prospects by delivering services to voters. However, the evidence presented here indicates that more money has not meant better educational outcomes in Mexico. Rather, new and vulnerable elected leaders are especially susceptible to the demands of powerful interest groups at the expense of accountability to constituents. In this case, the dominant teachers' union has used its leverage to exact greater control over the country's resource-rich merit pay program for teachers. It has exploited this control to increase salaries and decrease standards for advancement up the remuneration ladder. The evidence suggests that increased electoral competition has led to the empowerment of entrenched interests rather than voters, with an overall negative effect on education.
Abstract:
Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as aero- and hydro-dynamical systems, which dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines that capture flow patterns and how to pick good viewpoints from which to observe flow fields become critical questions. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnection in the transformed space. Most viewpoint selection methods consider only external viewpoints outside of the flow field, which cannot convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside of the flow field and then generating a B-Spline curve path traversing these viewpoints to provide users with close-up views of the flow field for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does so. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
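To give a flavor of the information-theoretic selection used in these works, here is a toy Python sketch; the visibility matrix and the entropy scoring are invented assumptions, not the formulation of [81] or [56].

    import numpy as np

    def shannon_entropy(p, eps=1e-12):
        # Entropy in bits of a (normalized) discrete distribution.
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        return float(-(p * np.log2(p + eps)).sum())

    def best_viewpoint(visibility):
        # visibility[v, s]: screen coverage of streamline s from viewpoint v.
        # Score each viewpoint by the entropy of its visible-streamline
        # distribution; a high-entropy view shows a balanced mix of lines.
        scores = [shannon_entropy(row) for row in visibility]
        return int(np.argmax(scores))

    views = np.array([[0.7, 0.2, 0.1],   # one streamline dominates this view
                      [0.4, 0.3, 0.3]])  # a more balanced view
    print(best_viewpoint(views))  # -> 1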
Abstract:
In the twenty-first century, the issue of privacy--particularly the privacy of individuals with regard to their personal information and effects--has become highly contested terrain, producing a crisis that affects both national and global social formations. This crisis, or problematic, characterizes a particular historical conjuncture I term the namespace. Using cultural studies and the theory of articulation, I map the emergent ways that the namespace articulates economic, juridical, political, cultural, and technological forces, materials, practices and protocols. The cohesive articulation of the namespace requires that privacy be reframed in ways that make its diminution seem natural and inevitable. In the popular media, privacy is often depicted as the price we pay as citizens and consumers for security and convenience, respectively. This discursive ideological shift supports and underwrites the interests of state and corporate actors who leverage the ubiquitous network of digitally connected devices to engender a new regime of informational surveillance, or dataveillance. The widespread practice of dataveillance represents a strengthening of the hegemonic relations between these actors--each shares an interest in promoting an emerging surveillance society, a burgeoning security politics, and a growing information economy--that further empowers them to capture and store the personal information of citizens/consumers. In characterizing these shifts and the resulting crisis, I also identify points of articulation vulnerable to rearticulation and suggest strategies for transforming the namespace in ways that might empower stronger protections for privacy and related civil rights.
Abstract:
Mainstream IDEs such as Eclipse support developers in managing software projects mainly by offering static views of the source code. Such a static perspective neglects any information about runtime behavior. However, object-oriented programs heavily rely on polymorphism and late binding, which makes them difficult to understand based on their static structure alone. Developers thus resort to debuggers or profilers to study the system's dynamics. However, the information provided by these tools is volatile and hence cannot be exploited to ease the navigation of the source space. In this paper we present an approach to augment the static source perspective with dynamic metrics such as precise runtime type information, or memory and object allocation statistics. Dynamic metrics let developers leverage runtime information to understand the behavior and structure of a system. We rely on dynamic data gathering based on aspects to analyze running Java systems. Through concrete use cases, we illustrate how dynamic metrics made directly available in the IDE are useful. We also report comprehensively on the efficiency of our approach to gathering dynamic metrics.
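As a rough analogue of the idea (the paper itself instruments running Java systems via aspects), a Python decorator can play the role of an aspect that records precise runtime type information; everything below is an illustrative sketch, not the paper's tooling.

    import functools
    from collections import Counter

    type_counts = Counter()

    def record_runtime_types(fn):
        # Advice-like wrapper: tallies the concrete runtime type returned by
        # fn, a dynamic metric a purely static source view cannot provide.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            type_counts[(fn.__name__, type(result).__name__)] += 1
            return result
        return wrapper

    @record_runtime_types
    def parse(source):
        return source.split()

    parse("a b c")
    print(type_counts)  # Counter({('parse', 'list'): 1})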
Abstract:
This paper describes a simple way to integrate the debt tax shield into an accounting-based valuation model. The market value of equity is determined by forecasting residual operating income, which is calculated by charging operating income for the operating assets at a required return that accounts for the tax benefit of borrowing to raise cash for the operations. The model assumes that the firm maintains a deterministic financial leverage ratio, which tends to converge quickly to typical steady-state levels over time. From a practical point of view, this characteristic is particularly helpful because it allows a continuing-value calculation at the end of a short forecast period.
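The structure of such a model can be sketched in LaTeX as follows; the notation and the exact discounting are my assumptions, not the paper's:

    V_0^E = \mathrm{NOA}_0 - \mathrm{NFO}_0
          + \sum_{t=1}^{T} \frac{\mathrm{OI}_t - r_w\,\mathrm{NOA}_{t-1}}{(1+r_w)^t}
          + \frac{\mathrm{CV}_T}{(1+r_w)^T}

Here OI_t is operating income, NOA_t net operating assets, NFO_0 net financial obligations, r_w a required return on operations that reflects the debt tax benefit, and CV_T the continuing value at the end of the short forecast period; the numerator is the residual operating income the abstract describes.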
Abstract:
The use of virtual learning environments in Higher Education (HE) has been growing in Portugal, driven by the Bologna Process. An example is the use of Learning Management Systems (LMS), which represent an opportunity to leverage technological advances in the educational process. The progress of information and communication technologies (ICT), coupled with the great development of the Internet, has brought significant challenges to educators, requiring thorough knowledge of the implementation process. These field notes present the results of a survey among teachers of a private HE institution on their use of Moodle as a tool to support face-to-face teaching. An essentially exploratory research methodology, based on a questionnaire survey and supported by statistical analysis, allowed us to detect the motivations, types of use and perceptions of teachers in relation to this kind of tool. The results showed that most teachers, by a narrow margin (58%), had not changed their pedagogical practice as a consequence of using Moodle. Among those who did, 67% had attended institutional internal training. Some of the results obtained suggest further investigation and provide guidelines for planning future internal training.
Abstract:
If workers are wealth maximizers, codetermination should lead to less risky investments, smaller dividends, reduced firm leverage, higher and more stable salaries, and more capital-intensive production processes. Unless codetermination also increases productivity by raising workers' morale and satisfaction or reduces information asymmetries within the firm, shareholder wealth and firm value will decline. An analysis of the West German case, however, indicates that codetermination has little, if any, effect on corporate operations and performance.
Abstract:
Researchers suggest that personalization on the Semantic Web will eventually add up to a Web 3.0. In this Web, personalized agents, rather than humans, process and thus generate the biggest share of information. In the sense of emergent semantics, which supplements the traditional formal semantics of the Semantic Web, this is well conceivable. An emergent Semantic Web underpinned by a fuzzy grassroots ontology can be accomplished by inducing knowledge from users' common parlance in mutual Web 2.0 interactions [1]. These ontologies can also be matched against existing Semantic Web ontologies to create comprehensive top-level ontologies. If augmented with information in the form of restrictions and associated reliability (Z-numbers) [2], this collection of fuzzy ontologies constitutes an important basis for an implementation of Zadeh's restriction-centered theory of reasoning and computation (RRC) [3] on the Web. By considering the real world's fuzziness, RRC differs from traditional approaches because it can handle restrictions described in natural language. A restriction is an answer to a question about the value of a variable, such as the duration of an appointment. In addition to mathematically well-defined answers, RRC can likewise deal with unprecisiated answers such as "about one hour." Inspired by mental functions, it constitutes an important basis to leverage present-day Web efforts into a natural Web 3.0. Based on natural language information, RRC may be accomplished with Z-number calculation to achieve personalized Web reasoning and computation. Finally, through their understanding of natural language, Web agents can react to humans more intuitively and thus generate and process information.
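A restriction such as "about one hour" can be pictured with a toy Python membership function; this is purely illustrative, as RRC and Z-number calculation are far richer than a single fuzzy number.

    def triangular(a, b, c):
        # Fuzzy number peaking at b with support (a, c).
        def mu(x):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        return mu

    about_one_hour = triangular(45, 60, 75)  # minutes (assumed support)
    print(about_one_hour(55))  # ~0.67: degree to which 55 min is "about one hour"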
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, participating media, and others, in an elegant and unified framework. However, MCPT is a sampling-based approach, and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists in increasing the sampling rate on a per-pixel basis, to ensure that each pixel's error falls below a predefined threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, striking a trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies by ensuring that we sample densely only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In a first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In a second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in a third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
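The greedy sample-then-reconstruct loop can be caricatured in a few lines of Python; the "scene", the box filter, and the relative-error proxy are all stand-ins invented for illustration, not the thesis's reconstruction.

    import numpy as np

    rng = np.random.default_rng(0)
    H, W = 32, 32
    truth = np.linspace(0.1, 1.0, H * W).reshape(H, W)  # stand-in "scene"

    def render(counts):
        # Monte Carlo estimate: per-pixel noise shrinks as 1/sqrt(counts).
        return truth + rng.normal(0.0, 0.2, (H, W)) / np.sqrt(counts)

    def reconstruct(img):
        # Toy reconstruction: 3x3 box filter plus a crude relative-error proxy.
        pad = np.pad(img, 1, mode='edge')
        out = sum(pad[dy:dy + H, dx:dx + W]
                  for dy in range(3) for dx in range(3)) / 9.0
        rel_err = (out - img) ** 2 / (out ** 2 + 1e-3)
        return out, rel_err

    counts = np.full((H, W), 8.0)
    for _ in range(4):  # iterate sampling and reconstruction
        recon, err = reconstruct(render(counts))
        # Distribute extra samples only where the residual error is largest.
        counts += np.where(err > np.quantile(err, 0.75), 8.0, 0.0)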
Abstract:
We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
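The idea behind structure-aligned gradient kernels can be conveyed with a small Python sketch, which is an invented illustration rather than the paper's estimator: conventional kernels take axis-aligned finite differences, while an aligned kernel differences each pixel against its most similar neighbor so that gradients stay sparse across edges.

    import numpy as np

    def conventional_gradient_x(img):
        # Axis-aligned forward differences: large responses straddle edges.
        return np.diff(img, axis=1)

    def aligned_gradient_x(img):
        # Difference each interior pixel against whichever horizontal
        # neighbor is closer in value, keeping gradients sparse at edges.
        left = img[:, 1:-1] - img[:, :-2]
        right = img[:, 2:] - img[:, 1:-1]
        return np.where(np.abs(left) <= np.abs(right), left, right)

    img = np.array([[0.0, 0.0, 1.0, 1.0]])
    print(conventional_gradient_x(img))  # [[0. 1. 0.]]
    print(aligned_gradient_x(img))       # [[0. 0.]] -> sparser across the edge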
Abstract:
The unprecedented success of social networking sites (SNSs) has recently been overshadowed by concerns about privacy risks. As SNS users grow weary of privacy breaches and thus develop distrust, they may restrict or even terminate their platform activities. In the long run, these developments endanger SNS platforms’ financial viability and undermine their ability to create individual and social value. By applying a justice perspective, this study aims to understand the means at the disposal of SNS providers to leverage the privacy concerns and trusting beliefs of their users—two important determinants of user participation on SNSs. Considering that SNSs have a global appeal, empirical tests assess the effectiveness of justice measures in three culturally distinct countries: Germany, Russia and Morocco. The results indicate that these measures are particularly suited to addressing the trusting beliefs of the SNS audience. Specifically, in all examined countries, procedural justice and the awareness dimension of informational justice improve perceptions of trust in the SNS provider. Privacy concerns, however, are not as easy to manage, because the impact of justice-based measures on privacy concerns is not universal. Beyond its theoretical value, this research offers valuable practical insights into the use of justice-based measures to promote trust and mitigate privacy concerns in a cross-cultural setting.
Abstract:
In recent years developing countries have faced highly dynamic changes affecting their natural resource base and their potential for development. Taking into account these changes in the development context, InfoResources initiated a critical reassessment of the results of InfoResources Trends 2005 and again invited experts from around the world to assess trends that least developed countries are likely to be facing by 2025. The unanimous signal conveyed by the international experts for this assessment is alarming: the degradation of natural resources is progressing. By 2025 it will reach a point where livelihoods in least developed countries will be significantly threatened and an increasing number of agro-ecosystems will lose their capacity to deliver important services. Expected positive social trends will not suffice as leverage to reverse the degradation of natural resources and thus alleviate poverty and hunger. However, the present reassessment clearly reveals that a change in thinking and a shift in paradigms have begun to take place. Yet a turnaround can only succeed if the emerging awareness of the need to reorient policy-making and the economy is followed by concrete action. It will be crucial that policies and institutions regain regulating power over greedy economic forces. This reassessment does not claim to be comprehensive. Rather, the present publication, which synthesises the experts’ inputs, aims at providing food for thought and initiating discussions.
Abstract:
In land systems, equitably managing trade-offs between planetary boundaries and human development needs represents a grand challenge in sustainability-oriented initiatives. Informing such initiatives requires knowledge about the nexus between land use, poverty, and environment. This paper presents results from Lao PDR, where we combined nationwide spatial data on land use types and the environmental state of landscapes with village-level poverty indicators. Our analysis reveals two general but contrasting trends. First, landscapes with paddy or permanent agriculture allow a greater number of people to live in less poverty but come at the price of a decrease in natural vegetation cover. Second, people practising extensive swidden agriculture and living in intact environments are often better off than people in degraded paddy or permanent agriculture. As poverty rates within different landscape types vary more than between landscape types, we cannot stipulate a land use–poverty–environment nexus. However, the distinct spatial patterns or configurations of these rates point to other important factors at play. Drawing on ethnicity as a proximate factor for endogenous development potentials and accessibility as a proximate factor for external influences, we further explore these linkages. Ethnicity is strongly related to poverty in all land use types almost independently of accessibility, implying that social distance outweighs geographic or physical distance. In turn, accessibility, almost a precondition for poverty alleviation, is mainly beneficial to ethnic majority groups and people living in paddy or permanent agriculture. These groups are able to translate improved accessibility into poverty alleviation. Our results show that the concurrence of external influences with local—highly contextual—development potentials is key to shaping outcomes of the land use–poverty–environment nexus. By addressing such leverage points, these findings help guide more effective development interventions. At the same time, they point to the need in land change science to better integrate the understanding of place-based land indicators with process-based drivers of land use change.
Abstract:
Following the collapse of the communist regime in 1989, Bulgaria has undergone dramatic political, economic and social transformations. The transition process of the past two decades was characterized by several reforms to support democratisation of the political system and the functioning of a free-market economy. Since 1992, Switzerland has been active in Bulgaria providing assistance to the transition process, with support to Sustainable Management of Natural Resources (SMNR) starting in 1995. The SMNR Capitalisation of Experiences (CapEx) took place between March and September 2007, in the context of SDC phasing out its programmes in Bulgaria by the end of 2007 due to the country’s accession to the European Union. The CapEx exercise culminated in the identification of 17 lessons learned. In the view of the CapEx team, many of these lessons are relevant for countries that are in the process of joining the EU and face democratisation challenges similar to Bulgaria's. Overall, the Swiss SMNR projects have been effective entry points to support areas that are crucial to democratic transitions, namely participation in public goods management, decentralisation, human capacity development in research and management, and preparation for EU membership. The specificity of the Swiss support stems from an approach that combines a long-term commitment with a clear thematic focus (forestry, biodiversity conservation and organic agriculture). The multistakeholder approach and the diversification of support between local, regional and national levels are also important elements that contributed to making a difference in relation to other donors supporting the Bulgarian transition. At the institutional level, there are a number of challenges where the contribution of SMNR activities was only modest, namely improving the legal framework and creating more transparency and accountability, both of which are time- and resource-consuming processes. In addition, the emergence of competent and sustainable non-government organisations (NGOs) is a complex process that requires support to membership-based organisations, a challenge that was hardly met in the case of SMNR. Finally, reform of government institutions involved in management of natural resources is difficult to achieve via project support only, as it requires leverage and commitment at the level of policy dialogue. At the programme management level, the CapEx team notes that corruption was not systematically addressed in SMNR projects, indicating that more attention should be given to this issue at the outset of any new project.