922 results for decentralized and centralized HRM
Abstract:
This PhD thesis is an empirical research project in the field of modern Polish history. The thesis focuses on Solidarity, the Network and the idea of workers’ self-management, and it is based on an in-depth analysis of Solidarity archival material. The Solidarity trade union was born in August 1980 after talks between the communist government and strike leaders at the Gdansk Lenin Shipyards. In 1981 a group called the Network emerged from cooperation among Poland’s large industrial plants. The Network grew out of Solidarity; it was made up of Solidarity activists, and the group acted as an economic partner to the union. The Network was the base of a grass-roots, nationwide workers’ self-management movement. Solidarity and the self-management movement were crushed by the imposition of Martial Law in December 1981. Solidarity revived itself immediately, and the union created an underground society. The Network also revived in the underground, and it continued to promote self-management activity where this was possible. When Solidarity regained its legal status in April 1989, workers’ self-management no longer had the same importance in the union. Solidarity’s new politico-economic strategy focused on free markets, foreign investment and privatization. This research project ends in July 1990, when the new Solidarity-backed government enacted a privatization law. The government decided to transform the property ownership structure through a centralized privatization process, which was a blow for supporters of workers’ self-management. This PhD thesis provides new insight into the evolution of the Solidarity union from 1980 to 1990 by analyzing the fate of workers’ self-management. It also examines the role of the Network throughout the 1980s and analyzes the important link between workers’ self-management and the core ideas of Solidarity. In addition, the link between political and economic reform is an important theme in this research project: the Network was aware that authentic workers’ self-management required reforms to the authoritarian political system. Workers’ self-management competed against other politico-economic ideas during the 1980s in Poland, and the outcome of this competition between different reform concepts has shaped modern-day Polish politics, economics and society.
Abstract:
Universities rely on Information Technology (IT) projects to support and enhance their core strategic objectives of teaching, research, and administration. The literature review found that the level of IT funding and resources in universities is not adequate to meet IT demands: universities receive more IT project requests than they can execute and must therefore fund IT projects selectively. The objectives of IT projects in universities vary; a project that benefits teaching functions may not benefit administrative functions, which makes IT project selection challenging. To aid IT decision making, many universities in the United States of America (USA) have established IT Governance (ITG) processes. ITG is an IT decision-making and accountability framework whose purpose is to align an organization’s IT efforts with its strategic objectives, realize the value of IT investments, meet expected performance criteria, and manage risks and resources (Weill & Ross, 2004). ITG in universities is relatively new, and it is not well known how ITG processes are helping nonprofit universities select the right IT projects and manage the performance of these projects. This research adds to the body of knowledge regarding IT project selection under a governance structure, the maturity of IT projects, and IT project performance in nonprofit universities. A case study methodology was chosen for this exploratory research. Convenience sampling was used to choose cases from two large research universities with decentralized colleges and two small, centralized universities. Data were collected on nine IT projects from these four universities through interviews and university documents. The multi-case analysis was complemented by Qualitative Comparative Analysis (QCA) to systematically analyze how the IT conditions lead to an outcome. This research found that IT projects were selected in a more informed manner in the centralized universities. ITG was more authoritative in the small centralized universities: the ITG committees included the key decision makers, decision-making roles and responsibilities were better defined, and ITG communication was more frequent. In the centralized universities, business units and colleges brought IT requests to the ITG committees, which in turn prioritized the requests and allocated funds and resources to the IT projects. ITG committee members in the centralized universities had a higher awareness of university-wide IT needs, and the IT projects tended to align with the strategic objectives. On the other hand, the decentralized colleges and business units in the large universities were influential and often bypassed the ITG processes. The decentralized units often chose “pet” IT projects and executed them in silos, without bringing them to the attention of the ITG committees. While these IT projects met departmental objectives, they did not always align with the university’s strategic objectives. This research found that IT project maturity in a university could be increased by following project management methodologies. IT project management maturity was found to be higher in the IT projects executed by the centralized universities, where a full-time project manager with greater project management expertise was assigned to manage the project. The IT project executed under the guidance of a Project Management Office (PMO) exhibited higher project management maturity, as the PMO set standards and controls for the project. IT projects managed in the decentralized colleges by part-time project managers with lower project management expertise exhibited lower project management maturity; these projects were often managed by business or technical leads who lacked project management expertise. This research found that the higher the IT project management maturity, the better the project performance: IT projects with higher maturity had shorter delays, fewer missed requirements, and fewer IT system errors. The quality of IT decisions in a university could be improved by centralizing the IT decision-making processes, and IT project management maturity could be improved by following project management methodologies. Stakeholder management and communication were found to be critical for the success of IT projects in the university. It is hoped that the findings from this research will help university leaders make strategic IT decisions and university IT project managers make IT project decisions.
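The abstract above mentions complementing the multi-case analysis with Qualitative Comparative Analysis (QCA). As a rough illustration of the crisp-set idea (grouping cases by their configuration of conditions and checking how consistently each configuration co-occurs with the outcome), here is a minimal Python sketch; the condition names and case data are hypothetical and not taken from the study.

```python
# Toy crisp-set QCA-style consistency check. The cases and condition names
# below are invented for illustration only.
from collections import defaultdict

cases = [
    # (centralized_governance, full_time_pm, pmo_oversight) -> project_on_track
    ((1, 1, 1), 1),
    ((1, 1, 0), 1),
    ((0, 0, 0), 0),
    ((0, 1, 0), 0),
    ((1, 0, 1), 1),
]

truth_table = defaultdict(list)
for conditions, outcome in cases:
    truth_table[conditions].append(outcome)

for conditions, outcomes in sorted(truth_table.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)  # share of cases showing the outcome
    print(conditions, f"consistency={consistency:.2f}", f"n={len(outcomes)}")
```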
Abstract:
This paper is part of the research project "The professional identity of teacher studies", which we have been developing for the last three years. The third phase of this research focuses on the job-placement experience of novice teachers who graduated no more than five years ago. We work with focus groups and with accounts of the professional experience and teacher education of the teachers involved in the research. For some teachers we also conduct biographical interviews to deepen our understanding of the processes through which professional identity is constructed. In this paper we present the story of Ana Belen, a pre-school teacher with four years of experience, working in an urban school with students at risk of exclusion. This school has an educational project committed to the neighbourhood, working jointly with the community and other social groups. From a professional perspective, Ana Belen's story is linked to the social, political and educational commitment of this school. Our interest lies in understanding the professional identity that Ana Belen has forged over her personal history, and how her education and job placement have contributed to it. We are also interested in how early professional experiences have influenced her development as a teacher, and specifically in what it has meant for her professional identity that her career started in this particular school. Consequently, this paper leads us to question the current teacher education model. In particular, we are interested in the kinds of professional experience it makes possible and, therefore, in the kinds of commitment it enables. We understand that the frameworks in which professional education and experience take place are relevant to enabling more or less transformative understandings of teaching. Conceptually, this paper adopts a socio-critical point of view (Gergen, 1985; Kincheloe, 2001; Wenger, 1988, etc.). We understand that teaching has to be analysed in relation to teachers' work contexts and personal histories, because we are dealing with historically and collectively built processes. Teaching is the result of the action of its actors, over time and in specific settings. With this research we therefore intend to break with the old gap between pre-service and in-service education. We think that both are part of the same process and are shaped by similar logics, although the settings change. We understand them as a continuous process that gives sense to the different and complex settings in which the teaching profession is built, rather than as differentiated and independent stages. Teachers' work is thus subject to particular conditions generated in fields as different as the institutional, corporate, cultural, social, political and moral. It displays a kaleidoscopic view of space, time and context. These are the axes along which teaching is formed, out of complexity and heterogeneity. How this complexity is articulated results in different ways of facing the work of teaching, according to different personal and professional histories. Teachers act with subjects in instituted contexts through the relationships they have with them, which gives teaching a situated and contingent character. Yet these contexts are strongly structured and ruled from centralized and generalized positions, which is, at the very least, paradoxical. Possibly, from our point of view, some of the crisis of teaching has to be explained from this paradoxical perspective and from the conflict that characterizes this job (Rivas, Leite y Cortés, 2011).
Abstract:
Modern writers like Djuna Barnes allow for the post-modern fluidity and explosion of sex and gender without finalizing either in a fixed form. Whereas the classical, archetypal androgyne is made up of two halves, one man and one woman, the deconstructed androgynous figure is not constituted of oppositional terms which would reflect an essential and unimpeachable truth. I reveal the way Djuna Barnes’ Nightwood not only thematizes the fluid androgyne, but also cleverly verbalizes David Wood’s perpetual and undischargeable “debt” to extra-discursivity while poetically critiquing gender “appropriateness,” societal constraints, and the constitution of identity. Barnes presents a decentralized, ungrounded and non-prescribed world in Nightwood not only through her cross-dressing and androgynous characters, but also in her poetics, her assertion of the open-ended quality of language, and a strong imperative to negotiate our physical existence in a world of fluid gender and sexual boundaries.
Abstract:
Natural disasters in Argentina and Chile played a significant role in the state-formation and nation-building process (1822-1939). This dissertation explores state and society responses to earthquakes by studying public and private relief efforts, reconstruction plans, crime and disorder, religious interpretations of catastrophes, national and transnational cultures of disaster, science and technology, and popular politics. Although Argentina and Chile share a political border and geological boundary, the two countries provide contrasting examples of state formation. Most disaster relief and reconstruction efforts emanated from the centralized Chilean state in Santiago. In Argentina, provincial officials made the majority of decisions in a catastrophe’s aftermath. Patriotic citizens raised money and collected clothing for survivors, efforts that helped to weave divergent regions together into a nation. The shared experience of earthquakes in all regions of Chile created a national disaster culture. Similarly, common disaster experiences, reciprocal relief efforts, and aid commissions linked Chileans with Western Argentine societies and generated a transnational disaster culture. Political leaders viewed reconstruction as an opportunity to implement their visions for the nation on the urban landscape. These rebuilding projects threatened existing social hierarchies and often failed to come to fruition. Rebuilding brought new technologies from Europe to the Southern Cone. New building materials and systems, however, had to be adapted to the South American economic and natural environment. In a catastrophe’s aftermath, newspapers projected images of disorder and the authorities feared lawlessness and social unrest. Judicial and criminal records, however, show that crime often decreased after a disaster. Finally, nineteenth-century earthquakes heightened antagonism and conflict between the Catholic Church and the state. Conservative clergy asserted that disasters were divine punishments for the state’s anti-clerical measures and later railed against scientific explanations of earthquakes.
Abstract:
Many important problems in communication networks, transportation networks, and logistics networks are solved by the minimization of cost functions. In general, these can be complex optimization problems involving many variables. However, physicists noted that in a network, a node variable (such as the amount of resources at a node) is connected to a set of link variables (such as the flows connected to that node), and similarly each link variable is connected to a small number of (usually two) node variables. This makes it possible to break the problem into local components, often arriving at distributed algorithms to solve the problems. Compared with centralized algorithms, distributed algorithms have the advantages of lower computational complexity and lower communication overhead. Since they respond faster to local changes in the environment, they are especially useful for networks with evolving conditions. This review will cover message-passing algorithms in applications such as resource allocation, transportation networks, facility location, traffic routing, and the stability of power grids.
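As a concrete, if simplified, illustration of the local message-passing idea described above, the following Python sketch runs a distributed Bellman-Ford shortest-path computation: each node repeatedly updates its cost to a destination using only values received from its neighbours. The toy network and link costs are invented for illustration and are not taken from the review.

```python
# Distributed Bellman-Ford as message passing: every node only exchanges its
# current cost-to-destination with its neighbours, no central solver needed.
import math

edges = {  # hypothetical undirected network: (node, node) -> link cost
    ("A", "B"): 1.0, ("B", "C"): 2.0, ("A", "C"): 4.0, ("C", "D"): 1.0,
}
nodes = {n for e in edges for n in e}
neigh = {n: {} for n in nodes}
for (u, v), w in edges.items():
    neigh[u][v] = w
    neigh[v][u] = w

dest = "D"
cost = {n: (0.0 if n == dest else math.inf) for n in nodes}
for _ in range(len(nodes) - 1):              # information spreads one hop per round
    cost = {n: (0.0 if n == dest else
                min(w + cost[m] for m, w in neigh[n].items()))
            for n in nodes}

print(cost)  # cost-to-go from every node to D, computed from local exchanges only
```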
Abstract:
Prior resilience research typically focuses on either the individual or the organisational level of analysis, emphasises resilience in relation to day-to-day stressors rather than extreme events and is empirically under-developed. In response, our study inductively theorises about the relationships between individual and organisational resilience, drawing upon a large-scale study of resilience work in UK and French organisations. Our first-hand accounts of resilience work reveal the micro-processes involved in producing resilient organisations, and highlight the challenges experienced in doing resilience work in large organisations. We show that these micro-processes have significant implications for resilience at both individual and organisational levels, and draw implications for how HRM interventions can help to promote individual, and thus organisational, resilience.
Abstract:
OBJECTIVES: To compare oral health and hearing outcomes from the Clinical Standards Advisory Group (CSAG, 1998) and the Cleft Care UK (CCUK, 2013) studies. SETTING AND SAMPLE POPULATION: Two UK-based cross-sectional studies of 5-year-olds born with non-syndromic unilateral cleft lip and palate undertaken 15 years apart. CSAG children were treated in a dispersed model of care with low-volume operators; CCUK children were treated in a centralized, high-volume operator system. MATERIALS AND METHODS: Oral health data were collected using a standardized proforma. Hearing was assessed using pure tone audiometry, and middle ear status by otoscopy and tympanometry. ENT and hearing history were collected from medical notes and parental report. RESULTS: Oral health was assessed in 264 of 268 children (98.5%). The mean dmft was 2.3, 48% were caries free, and 44.7% had untreated caries; there was no evidence this had changed since the CSAG survey. Oral hygiene was generally good, and 96% were enrolled with a dentist. Audiology was assessed in 227 of 268 children (84.7%). Forty-three per cent of children received at least one set of grommets, a 17.6% reduction compared to CSAG. Abnormal middle ear status was apparent in 50.7% of children. There was no change in hearing levels, but more children with hearing loss were managed with hearing aids. CONCLUSIONS: Outcomes for dental caries and hearing were no better in CCUK than in CSAG, although there was reduced use of grommets and increased use of hearing aids. The service specifications and recommendations should be scrutinized and implemented.
Abstract:
High-resolution melt (HRM) analysis can identify sequence polymorphisms by comparing the melting curves of amplicons generated by real-time PCR amplification. We describe the application of this technique to identify Mycobacterium avium subspecies paratuberculosis types I, II, and III. The HRM approach was based on type-specific nucleotide sequences in MAP1506, a member of the PPE (proline-proline-glutamic acid) gene family.
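To illustrate the curve-comparison principle behind HRM typing, the Python sketch below assigns a sample to the closest of three reference melt curves. The curves are synthetic sigmoids with invented melting temperatures, not MAP type I/II/III data, and real HRM software performs normalization and temperature shifting that this toy omits.

```python
# Toy melt-curve matching: classify a sample by the reference profile with the
# smallest squared distance. All curves here are synthetic placeholders.
import numpy as np

temps = np.linspace(75.0, 90.0, 151)  # degrees C

def melt_curve(tm, slope=1.2):
    # idealized normalized fluorescence: high below Tm, dropping around Tm
    return 1.0 / (1.0 + np.exp((temps - tm) / slope))

references = {
    "type I": melt_curve(82.0),
    "type II": melt_curve(83.0),
    "type III": melt_curve(84.0),
}
sample = melt_curve(83.1) + np.random.normal(0, 0.01, temps.size)

best = min(references, key=lambda t: np.sum((references[t] - sample) ** 2))
print("closest reference profile:", best)  # expected: type II
```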
Abstract:
Shortages in the supply of nutrients and freshwater for a growing human population are critical global issues. Traditional centralized sewage treatment can prevent eutrophication and provide sanitation, but it is neither efficient nor sustainable in terms of water and resources. Source separation of household wastes, combined with decentralized resource recovery, presents a novel approach to solving these issues. Urine makes up only about 1 % of household wastewater yet contains up to 80 % of the nitrogen (N) and 50 % of the phosphorus (P). Since microalgae are efficient at nutrient uptake, growing these organisms in urine might be a promising technology to concomitantly clean urine and produce valuable biomass containing the major plant nutrients. While state-of-the-art suspension systems for algal cultivation have major shortcomings in this application, immobilized cultivation on Porous Substrate Photobioreactors (PSBRs) might be a feasible alternative. The aim of this study was to develop a robust process for nutrient recovery from minimally diluted human urine using microalgae on PSBRs. The green alga Desmodesmus abundans strain CCAC 3496 was chosen for its good growth after screening 96 algal strains derived from urine-specific isolations and culture collections. Treatment of urine, diluted 1:1 with tap water and without addition of nutrients, was performed at a light intensity of 600 μmol photons m⁻² s⁻¹ with 2.5 % CO2 and at pH 6.5. A growth rate of 7.2 g dry weight m⁻² day⁻¹ and removal efficiencies for N and P of 13.1 % and 94.1 %, respectively, were determined. Pre-treatment of urine with activated carbon was found to eliminate possible detrimental effects of pharmaceuticals. These results provide a basis for further development of the technology at pilot scale. If found to be safe in terms of human and environmental health, the biomass produced from three persons could provide the P for the annual production of 31 kg of wheat grain and 16 kg of soybean, covering the caloric demand in food for almost one month of the year for such a household. In combination with other technologies, PSBRs could thus be applied in a decentralized resource recovery system, helping to locally close the loop between sanitation and food production.
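As a back-of-the-envelope illustration of what the reported figures imply, the short Python sketch below applies the stated removal efficiencies and areal productivity to a hypothetical one-square-metre PSBR panel fed with diluted urine; the daily feed volume and influent nutrient concentrations are assumed values for illustration, not data from the study.

```python
# Rough mass-balance sketch using the reported 13.1 % N and 94.1 % P removal
# and 7.2 g dry weight m^-2 day^-1. Feed volume and concentrations are assumed.
area_m2 = 1.0
days = 30
feed_l_per_day = 1.5                         # hypothetical 1:1 diluted urine per m^2 per day
n_conc_g_per_l, p_conc_g_per_l = 4.0, 0.3    # hypothetical influent concentrations

removed_n = feed_l_per_day * n_conc_g_per_l * 0.131 * days * area_m2
removed_p = feed_l_per_day * p_conc_g_per_l * 0.941 * days * area_m2
biomass = 7.2 * area_m2 * days

print(f"over {days} d: {biomass:.0f} g biomass, "
      f"{removed_n:.1f} g N and {removed_p:.1f} g P removed")
```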
Abstract:
Network monitoring is of paramount importance for effective network management: it allows operators to constantly observe the network’s behavior, ensure it is working as intended, and trigger both automated and manual remediation procedures in case of failures and anomalies. SDN decouples the control logic from the legacy network infrastructure to perform centralized control over multiple switches; in this context, the switches are only responsible for forwarding packets according to the flow-control instructions provided by the controller. However, as current SDN switches expose only simple per-port and per-flow counters, the controller has to do almost all the processing needed to determine the network state, which causes significant communication overhead and excessive latency for monitoring purposes. The absence of programmability in the SDN data plane prompted the advent of programmable switches, which allow developers to customize the data-plane pipeline and implement novel programs that operate directly in the switches. This means that certain monitoring tasks can be offloaded to programmable data planes, enabling fine-grained monitoring even at very high packet-processing speeds. Given the central importance of network monitoring that exploits programmable data planes, the goal of this thesis is to enable a wide range of monitoring tasks in programmable switches, with a specific focus on those equipped with programmable ASICs. Indeed, most network monitoring solutions available in the literature do not take the computational and memory constraints of programmable switches into due account, preventing, de facto, their successful implementation in commodity switches. This thesis argues that, despite these constraints, network monitoring tasks can be executed in programmable switches. Our evaluations show that the contributions in this thesis could be used by network administrators as well as network security engineers to better understand the network status according to different monitoring metrics, and thus prevent network infrastructure and service outages.
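One common way to make fine-grained monitoring fit the tight memory of a switch pipeline is to use compact probabilistic counter structures; the following Python sketch of a count-min sketch illustrates the general idea. It is an illustrative example of this class of data structures, not an implementation from the thesis, and the flow keys and sizing parameters are hypothetical.

```python
# Toy count-min sketch: fixed-size counters with hash-based indexing, the kind
# of structure often used for per-flow estimates under strict memory budgets.
import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        h = hashlib.blake2b(f"{row}:{key}".encode()).digest()
        return int.from_bytes(h[:8], "little") % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, key)] += count

    def estimate(self, key):
        # minimum over rows: may overestimate due to collisions, never underestimates
        return min(self.table[row][self._index(row, key)] for row in range(self.depth))

cms = CountMinSketch()
for pkt in ["10.0.0.1>10.0.0.2"] * 500 + ["10.0.0.3>10.0.0.4"] * 20:
    cms.add(pkt)
print(cms.estimate("10.0.0.1>10.0.0.2"))  # approximately 500
```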
Abstract:
Persistent food insecurity and famines have significantly shaped Ethiopia's development policies for decades. Frequent famines not only caused the death of hundreds of thousands of victims but also contributed significantly to the two revolutions that swept away the Haile Selassie and Derg regimes, as well as taxing the legitimacy of the incumbent regime. As a result, agriculture and food security have increasingly become top policy priorities for all political regimes in Ethiopia. However, the development policies of Ethiopia's ruling elites have consistently failed to transform backward agriculture and ensure food security. These failures have been attributed to several factors: authoritarian politics, centralized rule lacking transparency and accountability, the isolation of peasants from the development and governance process, and the lack of coherent agricultural development strategies that invest in peasant agriculture and create synergy among sectors are identified as key issues contributing to the persistence of food insecurity in the country. The literature on the failure of Ethiopia's political regimes to address food insecurity and famine has two major gaps that this study aims to fill. First, the cumulative and path-dependent food security and agricultural development policy environment has not been adequately considered. Second, the strategy of extraversion, whereby successive political regimes used external support as relief to stave off famine-induced political crises, has received little attention. This study used a mixed approach to collect data and to trace the evolving interplay of development policies and food security across the three regimes within the context of international food security discourses. It shows how the historical patterns of Ethiopia's regimes' approaches to development and governance led to frequent famines and persistent food insecurity.
Abstract:
The fourth industrial revolution is paving the way for Industrial Internet of Things applications in which industrial assets (e.g., robotic arms, valves, pistons) are equipped with a large number of wireless devices (i.e., microcontroller boards that embed sensors and actuators) to enable a plethora of new applications, such as analytics, diagnostics, and monitoring, as well as supervisory and safety-control use cases. Nevertheless, current wireless technologies, such as Wi-Fi, Bluetooth, and even private 5G networks, cannot fulfill all the requirements set by the Industry 4.0 paradigm, thus opening up new 6G-oriented research trends, such as the use of THz frequencies. In light of the above, this thesis (i) provides a broad overview of the main use cases, requirements, and key enabling wireless technologies foreseen by the fourth industrial revolution, and (ii) proposes innovative contributions, both theoretical and empirical, to enhance the performance of current and future wireless technologies at different levels of the protocol stack. In particular, at the physical layer, signal processing techniques are exploited to analyze two multiplexing schemes, namely Affine Frequency Division Multiplexing and Orthogonal Chirp Division Multiplexing, which seem promising for high-frequency wireless communications. At the medium access control layer, three protocols for intra-machine communications are proposed: one is based on LoRa at 2.4 GHz and the others work in the THz band. Different scheduling algorithms for private industrial 5G networks are compared, and two main proposals are described, i.e., a decentralized scheme that leverages machine learning techniques to better address aperiodic traffic patterns, and a centralized contention-based design that serves a federated learning industrial application. Results are provided in terms of numerical evaluations, simulation results, and real-world experiments. Several improvements over the state of the art were obtained, and the description of up-and-running testbeds demonstrates the feasibility of some of the theoretical concepts when considering a real industrial plant.
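As a small numerical aside on Orthogonal Chirp Division Multiplexing, the sketch below builds the discrete Fresnel transform matrix that defines the OCDM chirp basis (for an even number of subcarriers) and checks numerically that the chirps are orthonormal, so symbols mapped onto them can be recovered exactly. This is only an illustration of the underlying transform, not the thesis's physical-layer analysis, and the block size is arbitrary.

```python
# Orthogonality check for the chirp basis behind OCDM (even block length N).
import numpy as np

def dfnt_matrix(N):
    # discrete Fresnel transform: quadratic-phase (chirp) kernel, unitary for even N
    m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(-1j * np.pi / 4) / np.sqrt(N) * np.exp(1j * np.pi * (m - n) ** 2 / N)

N = 64
Phi = dfnt_matrix(N)
print(np.allclose(Phi @ Phi.conj().T, np.eye(N)))   # True: the chirps are orthonormal

# Map QPSK-like symbols onto the chirps and recover them with the forward transform.
s = (np.random.randint(0, 2, N) * 2 - 1) + 1j * (np.random.randint(0, 2, N) * 2 - 1)
x = Phi.conj().T @ s        # transmitted block: superposition of N overlapping chirps
s_hat = Phi @ x             # receiver side (ideal channel)
print(np.allclose(s_hat, s))
```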
Abstract:
The deployment of ultra-dense networks is one of the most promising solutions for managing the co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as distributed antenna systems (DAS) and the cloud radio access network (C-RAN). Through these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques are shown to be fundamental to overall system operation and performance enhancement. The core of this dissertation is the result of an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunications standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target is proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, the performance was evaluated for different channel models and target BLERs. Furthermore, due to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user-grouping algorithms with multi-antenna and multi-user techniques was performed. In conclusion, the importance and impact of other fundamental physical-layer operations, such as channel estimation and power control, on the overall end-to-end system behavior and performance are highlighted.
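To make the idea of PID-driven MCS selection concrete, here is a toy outer-loop link adaptation sketch in Python: the controller compares the measured BLER with the target and nudges the MCS index suggested by the CQI report. The gains, the MCS range, and the BLER trace are hypothetical, and this is not the algorithm proposed in the dissertation.

```python
# Toy PID outer loop steering the MCS so that measured BLER tracks a target.
class PidMcsSelector:
    def __init__(self, target_bler=0.10, kp=8.0, ki=2.0, kd=1.0, max_mcs=27):
        self.target, self.kp, self.ki, self.kd = target_bler, kp, ki, kd
        self.max_mcs = max_mcs
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_bler, cqi_mcs):
        error = self.target - measured_bler      # negative when the link errs too often
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        offset = self.kp * error + self.ki * self.integral + self.kd * derivative
        mcs = int(round(cqi_mcs + offset))       # correction on top of the CQI-based MCS
        return max(0, min(self.max_mcs, mcs))

selector = PidMcsSelector()
for bler in [0.30, 0.22, 0.15, 0.10, 0.08]:      # BLER measured over successive windows
    print(selector.update(measured_bler=bler, cqi_mcs=15))
```

With the gains above, the selector first backs off from the CQI-suggested MCS while the BLER exceeds the target and then gradually moves back up as the error shrinks, which is the qualitative behavior an outer-loop controller of this kind is meant to provide.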
Abstract:
Applications that offer services based on users' position are increasingly widespread, from navigation systems to intelligent transportation systems (ITS), which will allow vehicles to communicate with one another. Some of these services even grant incentives when the user visits or passes through certain areas; for example, a shop might offer coupons to people in its vicinity. However, a user's position is easy to falsify, and in this last type of service users could obtain the incentives illicitly by cheating the system. It therefore becomes necessary to implement an architecture able to prevent people from falsifying their position. To this end, numerous works have been proposed that delegate the generation of "proofs of location" to centralized servers, or that deploy access points able to issue proofs or certificates to nearby users. In this thesis we designed an architecture that differs from those of the related works, exploiting the features offered by blockchain technology and distributed storage. In this way it was possible to design a solution that is decentralized and transparent, ensuring data immutability through the use of the blockchain. We also detail an idea for a use case to be built on top of the proposed architecture, highlighting the benefits that could potentially be drawn from it. Finally, we implemented part of the system, measuring the time and costs required by transactions on some of the blockchains available today, using the infrastructures provided by Ethereum, Polygon and Algorand.
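To make the general data flow concrete, the Python sketch below shows one plausible way to prepare a location claim: the full claim would live in distributed storage, while only its digest is anchored on a blockchain for immutability. Field names and values are placeholders, and real signing and smart-contract interaction are deliberately omitted; this is a sketch of the concept, not the thesis's implementation.

```python
# Build a location claim, hash it, and show what would go on-chain vs. off-chain.
import hashlib
import json
import time

claim = {
    "user": "0xUserAddressPlaceholder",    # hypothetical account identifier
    "venue": "shop-42",                    # hypothetical point of interest
    "lat": 44.4949, "lon": 11.3426,        # example coordinates (Bologna)
    "timestamp": int(time.time()),
}

payload = json.dumps(claim, sort_keys=True).encode()   # canonical serialization
digest = hashlib.sha3_256(payload).hexdigest()

print("off-chain: store the full payload in distributed storage, keyed by its digest")
print("on-chain (e.g. Ethereum / Polygon / Algorand transaction data):", digest)
```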