879 results for Rank and file unionism


Relevance: 30.00%

Abstract:

In this work we characterized the social hierarchy of non-reproductive individuals of Cichlasoma dimerus (Heckel, 1840), independently for each sex, and its relationship to the opportunity for social status ascent. Females and males located at the top of the social hierarchy ascended in social status when the opportunity arose, indicating that dominance is directly correlated with the likelihood of social ascent. Dominance was positively correlated with size in males but not in females, suggesting that in females dominance relates to intrinsic features such as aggressiveness or personality rather than to body and/or ovarian size. Physiological and morphometric variables related to reproduction, stress and body color were measured in non-reproductive fish and correlated with dominance and the likelihood of social ascent. Dominance was negatively correlated with plasma cortisol levels in both sexes. No correlation with dominance was found for plasma androgen levels (testosterone and 11-ketotestosterone). No correlation was detected between dominance and the selected morphological and physiological variables measured in females, suggesting no reproductive inhibition in this sex at the physiological level and that all females appear to be ready for reproduction. In contrast, the social rank of non-reproductive males was positively correlated with pituitary content of follicle-stimulating hormone (FSH) and with the gonadosomatic index. This suggests an adaptive mechanism by which non-reproductive males adjust their reproductive investment to their likelihood of social status ascent, as perceived from their position in the social hierarchy. This likelihood is translated into a physiological signal through plasma cortisol levels, which inhibit gonadal investment via pituitary inhibition of FSH, representing an anticipatory response to the opportunity for social status ascent.

Relevance: 30.00%

Abstract:

Bone remodeling is affected by mechanical loading and inflammatory mediators, including chemokines. The chemokine (C–C motif) ligand 3 (CCL3) is involved in bone remodeling by binding to C–C chemokine receptors 1 and 5 (CCR1 and CCR5) expressed on osteoclasts and osteoblasts. Our group has previously demonstrated that CCR5 down-regulates mechanical loading-induced bone resorption. Thus, the present study aimed to investigate the role of CCR1 and CCL3 in bone remodeling induced by mechanical loading during orthodontic tooth movement in mice. Our results showed that bone remodeling was significantly decreased in CCL3−/− and CCR1−/− mice and in animals treated with Met-RANTES (an antagonist of CCR5 and CCR1). mRNA levels of receptor activator of nuclear factor kappa-B (RANK), its ligand RANKL, tumor necrosis factor alpha (TNF-α) and RANKL/osteoprotegerin (OPG) ratio were diminished in the periodontium of CCL3−/− mice and in the group treated with Met-RANTES. Met-RANTES treatment also reduced the levels of cathepsin K and metalloproteinase 13 (MMP13). The expression of the osteoblast markers runt-related transcription factor 2 (RUNX2) and periostin was decreased, while osteocalcin (OCN) was augmented in CCL3−/− and Met-RANTES-treated mice. Altogether, these findings show that CCR1 is pivotal for bone remodeling induced by mechanical loading during orthodontic tooth movement and these actions depend, at least in part, on CCL3.

Relevance: 30.00%

Abstract:

Background: UCP2 (uncoupling protein 2) plays an important role in cardiovascular diseases, and recent studies have suggested that the A55V polymorphism can cause UCP2 dysfunction. The main aim was to investigate the association of the A55V polymorphism with cardiovascular events in a group of 611 patients enrolled in the Medical, Angioplasty or Surgery Study II (MASS II), a randomized trial comparing treatments for patients with coronary artery disease and preserved left ventricular function. Methods: The participants of MASS II were genotyped for the A55V polymorphism using an allele-specific PCR assay. Survival curves were calculated with the Kaplan–Meier method and compared with the log-rank statistic. The relationship between baseline variables and the composite end-point of cardiac death, acute myocardial infarction (AMI), refractory angina requiring revascularization and cerebrovascular accident was assessed using a Cox proportional hazards survival model. Results: There were no significant differences in baseline variables according to genotype. After 2 years of follow-up, dysglycemic patients harboring the VV genotype had a higher occurrence of AMI (p=0.026), death+AMI (p=0.033), new revascularization intervention (p=0.009) and combined events (p=0.037) compared with patients carrying the other genotypes. This association was not evident in normoglycemic patients. Conclusions: These findings support the hypothesis that the A55V polymorphism is associated with UCP2 functional alterations that increase the risk of cardiovascular events in patients with previous coronary artery disease and dysglycemia.
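The survival machinery described in this abstract (Kaplan–Meier curves, log-rank comparison, Cox proportional hazards for a composite end-point) can be sketched in Python with the lifelines package. The sketch below is illustrative only: the file name and the column names ("years_followup", "event", "vv_genotype", "age", "dysglycemia") are assumptions, not the MASS II dataset.

```python
# Minimal sketch of the survival analysis described above (lifelines package).
# File and column names are hypothetical stand-ins for the study data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("mass2_genotyped.csv")  # hypothetical file

vv = df[df["vv_genotype"] == 1]
other = df[df["vv_genotype"] == 0]

# Kaplan-Meier curves for the two genotype groups
kmf = KaplanMeierFitter()
kmf.fit(vv["years_followup"], event_observed=vv["event"], label="VV")
kmf.plot_survival_function()
kmf.fit(other["years_followup"], event_observed=other["event"], label="AA/AV")
kmf.plot_survival_function()

# Log-rank comparison of the two curves
lr = logrank_test(vv["years_followup"], other["years_followup"],
                  event_observed_A=vv["event"], event_observed_B=other["event"])
print("log-rank p-value:", lr.p_value)

# Cox proportional hazards model for the composite end-point
cph = CoxPHFitter()
cph.fit(df[["years_followup", "event", "vv_genotype", "age", "dysglycemia"]],
        duration_col="years_followup", event_col="event")
cph.print_summary()
```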

Relevance: 30.00%

Abstract:

The aim of this study was to evaluate in vivo the clinical applicability of two electronic apex locators (EALs) - Apex (Septodont) and iPex (NSK) - in different groups of human teeth by using radiography. The working lengths (WLs) of 100 root canals were determined electronically. The EAL to be used first was chosen randomly and a K-file was inserted into the root canal until the EAL display indicated the location of the apical constriction (0 mm). The K-file was fixed to the tooth and a periapical radiograph was taken using a radiographic film holder. The K-file was removed and the WL was measured. The same procedure was repeated using the other EAL. Radiographs were examined with the aid of a light-box with lens of ×4 magnification by two blinded experienced endodontists. The distance between the file tip and the root apex was recorded as follows: (A) +1 to 0 mm, (B) -0.1 to 0.5 mm, (C) -0.6 to 1 mm, (D) -1.1 to 1.5 mm, and (E) -1.6 mm or greater. For statistical purposes, these scores were divided into 2 subgroups according to the radiographic apex: acceptable (B, C, and D) and non-acceptable (A and E). Statistically significant differences were not found between the results of Apex and iPex in terms of acceptable and non-acceptable measurements (p>0.05) or in terms of the distance recorded from file tip and the radiographic apex (p>0.05). Apex and iPex EALs provided reliable measurements for WL determination for endodontic therapy.
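The A–E scoring scheme above is essentially a binning of the measured file-tip-to-apex distance. The following small sketch expresses that binning; the exact handling of boundary values and of distances beyond +1 mm is an assumption, since the abstract does not define them.

```python
# Minimal sketch of the scoring scheme described above. Distances are in mm;
# negative values mean the file tip is short of the radiographic apex.
def score(distance_mm: float) -> str:
    if 0.0 <= distance_mm <= 1.0:
        return "A"          # at or up to +1 mm beyond the apex
    if -0.5 <= distance_mm < 0.0:
        return "B"
    if -1.0 <= distance_mm < -0.5:
        return "C"
    if -1.5 <= distance_mm < -1.0:
        return "D"
    return "E"              # -1.6 mm or more short (values > +1 mm also fall here; assumption)

def acceptable(distance_mm: float) -> bool:
    # Scores B, C and D were pooled as "acceptable", A and E as "non-acceptable".
    return score(distance_mm) in {"B", "C", "D"}

if __name__ == "__main__":
    for d in (0.3, -0.2, -0.8, -1.4, -2.0):
        print(d, score(d), acceptable(d))
```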

Relevance: 30.00%

Abstract:

PURPOSE: The aim of this study was to investigate the influence of cervical preflaring on the determination of the initial apical file (IAF) in the palatal roots of maxillary molars, and to determine the morphologic shape of the canal 1 mm short of the apex. METHODS: After standard access cavities were prepared, group 1 received the IAF without cervical preflaring (WCP). In groups 2 to 5, preflaring was performed with Gates-Glidden (GG), Anatomic Endodontics Technology (AET), GT Rotary Files (GT) and LA Axxess (LA) instruments, respectively. Each canal was sized using manual K-files, starting with size 08 files and making passive movements until the working length (WL) was reached. File sizes were increased until a binding sensation was felt at the WL. The IAF area and the area of the root canal were measured by SEM to determine the percentage of the canal occupied by the IAF in each sample. The morphologic shape of the root canal was classified as circular, oval or flattened. Statistical analysis was performed by ANOVA with the Tukey test (P < 0.05). RESULTS: The percentage of the canal occupied by the IAF decreased in the order LA > GT = AET > GG > WCP. The morphologic shape was predominantly oval. CONCLUSION: The type of cervical preflaring used interferes with the determination of the IAF.
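The ANOVA/Tukey comparison across the five preflaring groups can be sketched in Python with scipy and statsmodels. The data file and its column names ("group", "pct_occupied") are assumptions for the example, not the study's dataset.

```python
# Illustrative sketch of the ANOVA / Tukey HSD comparison described above.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("iaf_areas.csv")   # hypothetical: columns "group", "pct_occupied"
groups = [g["pct_occupied"].values for _, g in df.groupby("group")]

# One-way ANOVA across the five preflaring groups (WCP, GG, AET, GT, LA)
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-hoc test at alpha = 0.05
tukey = pairwise_tukeyhsd(endog=df["pct_occupied"], groups=df["group"], alpha=0.05)
print(tukey.summary())
```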

Relevance: 30.00%

Abstract:

In this paper, a sample of planetary nebulae in the Galaxy's inner disk and bulge is used to find the galactocentric distance that optimally separates these two populations in terms of their abundances. Statistical distance scales were used to investigate the distribution of abundances across the disk–bulge interface, while a Kolmogorov–Smirnov test was used to find the distance at which the chemical properties of these regions separate optimally. The statistical analysis indicates that, on average, the inner population is characterized by lower abundances than the outer component. Additionally, for the α-element abundances, the inner population does not follow the disk's radial gradient toward the Galactic Center. Based on our results, we suggest a bulge–disk interface at 1.5 kpc, marking the transition between the bulge and the inner disk of the Galaxy as defined by the intermediate-mass population.
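The Kolmogorov–Smirnov scan for the radius that best separates the two populations can be sketched as follows. The file, column names and the candidate radius grid are assumptions made for the example, not the paper's actual sample or abundance tracer.

```python
# Illustrative sketch of a Kolmogorov-Smirnov scan: for each candidate galactocentric
# radius, split the sample and compare the abundance distributions on the two sides.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

pn = pd.read_csv("pn_abundances.csv")   # hypothetical: columns "r_gal_kpc", "log_oh"

candidate_radii = np.arange(0.5, 4.05, 0.1)
best_r, best_stat = None, -1.0
for r in candidate_radii:
    inner = pn.loc[pn["r_gal_kpc"] <= r, "log_oh"]
    outer = pn.loc[pn["r_gal_kpc"] > r, "log_oh"]
    if len(inner) < 10 or len(outer) < 10:
        continue  # skip splits that leave too few objects on one side
    stat, p = ks_2samp(inner, outer)
    if stat > best_stat:
        best_r, best_stat = r, stat

print(f"abundances separate best at R ~ {best_r:.1f} kpc (KS statistic {best_stat:.2f})")
```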

Relevance: 30.00%

Abstract:

Abstract Background: Beyond the regulatory function of microRNAs (miRNAs), their differential expression patterns have been used to define miRNA signatures and to disclose disease biomarkers. To address the question of whether patients presenting the different types of diabetes mellitus can be distinguished on the basis of their miRNA and mRNA expression profiles, we obtained peripheral blood mononuclear cell (PBMC) RNA from 7 type 1 (T1D), 7 type 2 (T2D), and 6 gestational diabetes (GDM) patients, which was hybridized to Agilent miRNA and mRNA microarrays. Data quantification and quality control were performed with the Feature Extraction software, and the data distribution was normalized using the quantile function implemented in the aroma.light package. Differentially expressed miRNAs/mRNAs were identified using Rank products, comparing T1D vs GDM, T2D vs GDM and T1D vs T2D. Hierarchical clustering was performed using the average linkage criterion with Pearson uncentered distance as the metric. Results: The use of the same microarray platform permitted the identification of sets of shared or specific miRNA/mRNA interactions for each type of diabetes. Nine miRNAs (hsa-miR-126, hsa-miR-1307, hsa-miR-142-3p, hsa-miR-142-5p, hsa-miR-144, hsa-miR-199a-5p, hsa-miR-27a, hsa-miR-29b, and hsa-miR-342-3p) were shared among T1D, T2D and GDM, and additional specific miRNAs were identified for T1D (20 miRNAs), T2D (14) and GDM (19) patients. ROC curves allowed the identification of specific and relevant (greater AUC values) miRNAs for each type of diabetes, including: i) hsa-miR-1274a, hsa-miR-1274b and hsa-let-7f for T1D; ii) hsa-miR-222, hsa-miR-30e and hsa-miR-140-3p for T2D; and iii) hsa-miR-181a and hsa-miR-1268 for GDM. Many of these miRNAs target mRNAs associated with diabetes pathogenesis. Conclusions: These results indicate that PBMCs can be used as reporter cells to characterize the miRNA expression profiles associated with the different manifestations of diabetes mellitus. Shared miRNAs may characterize diabetes as a metabolic and inflammatory disorder, whereas specific miRNAs may represent biological markers for each type of diabetes and deserve further attention.
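The per-miRNA ROC screening mentioned above (ranking candidate miRNAs by AUC for a given pairwise comparison) can be sketched with scikit-learn. The expression matrix below is a synthetic stand-in generated at random, not the study's data; only the sample sizes (7 T1D, 6 GDM) are taken from the abstract.

```python
# Illustrative sketch of per-miRNA ROC screening (scikit-learn) on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_t1d, n_gdm, n_mirnas = 7, 6, 50
expr = rng.normal(size=(n_t1d + n_gdm, n_mirnas))     # rows: samples, cols: miRNAs
labels = np.array([1] * n_t1d + [0] * n_gdm)          # 1 = T1D, 0 = GDM

# AUC of each miRNA taken alone as a classifier score, then rank by AUC.
# AUC values well below 0.5 would flag down-regulated candidates instead.
aucs = np.array([roc_auc_score(labels, expr[:, j]) for j in range(n_mirnas)])
top = np.argsort(aucs)[::-1][:5]
for j in top:
    print(f"miRNA #{j}: AUC = {aucs[j]:.2f}")
```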

Relevance: 30.00%

Abstract:

Study I: Real Wage Determination in the Swedish Engineering Industry. This study uses the monopoly union model to examine the determination of real wages, and in particular the effects of active labour market programmes (ALMPs) on real wages, in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of increases in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect of increases in the participation rates of ALMPs, relief jobs and labour market training. This can be interpreted as meaning that the possibility of participating in an ALMP increases the utility for workers of not being employed in the industry, which in turn may increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect of increases in the unemployment rate.

Study II: Intersectoral Wage Linkages in Sweden. The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability of wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability of wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence of a wage-leading role, in line with the assumptions of the Scandinavian model, for any of the sectors.

Study III: Wage and Price Determination in the Private Sector in Sweden. The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as the result of a "battle of mark-ups" between trade unions and firms. The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two. Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent. Furthermore, an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, and the move to a floating Swedish krona with its substantial depreciation at that time, affected the determination of import prices.
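The Johansen trace test and VEC model estimation used throughout these three studies can be sketched in Python with statsmodels. The file, the variable names and the lag/rank choices below are assumptions made for the example; they do not reproduce the studies' specifications.

```python
# Illustrative sketch of a Johansen trace test and VEC model fit (statsmodels).
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

data = pd.read_csv("swedish_quarterly.csv",          # hypothetical quarterly dataset
                   usecols=["real_wage", "productivity", "tax_wedge", "unemployment"])

# Johansen trace test: det_order=0 includes a constant, k_ar_diff lagged differences.
jres = coint_johansen(data, det_order=0, k_ar_diff=4)
print("trace statistics:", jres.lr1)     # compare against the jres.cvt critical values

# Estimate a VEC model with a chosen cointegration rank (Study III uses rank 2).
vecm = VECM(data, k_ar_diff=4, coint_rank=2, deterministic="ci")
res = vecm.fit()
print(res.beta)    # long-run cointegration vectors
print(res.alpha)   # adjustment (loading) coefficients
```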

Relevance: 30.00%

Abstract:

Gastroesophageal junction (GEJ) adenocarcinomas are uncommon before the age of 40 years. While certain clinical, pathological and molecular features of GEJ adenocarcinoma in older patients have been extensively studied, these characteristics in the younger population remain to be determined. In the recent literature, high sensitivity and specificity for the detection of dysplasia and esophageal adenocarcinoma were demonstrated using a multicolor fluorescence in situ hybridization (FISH) DNA probe set specific for the locus-specific regions 9p21 (p16) and 20q13.2 and for the Y chromosome. We evaluated 663 patients with GEJ adenocarcinoma and divided them into two age groups (< 50 and ≥ 50 years). FISH with the selected DNA probes for the Y chromosome, locus 9p21 (p16) and locus 20q13.2 was performed on formalin-fixed, paraffin-embedded tissue from surgical resections of 17 younger and 11 older patients. Signals were counted in more than 100 cells for each histopathological category. The chromosomal aberrations were then compared between the two age groups with a focus on uninvolved squamous and columnar epithelium, intestinal metaplasia (Barrett's mucosa), glandular dysplasia, and adenocarcinoma. Comparisons were performed by the chi-square test, Fisher's exact test, Student's t-test and the Mann-Whitney U-test, as appropriate. Survival was estimated by the Kaplan-Meier method, with univariate analysis by the log-rank test. Significance was taken at the 5% level. There was no difference in the surgical technique applied in the two age groups, and most patients underwent Ivor Lewis esophagectomy. Among the clinical variables, there was a higher incidence of smoking history in the older patient group. We identified a progressive loss of the Y chromosome from benign squamous epithelium to Barrett's mucosa and glandular dysplasia and, ultimately, a near-complete loss in adenocarcinoma in both age groups. The younger group showed significantly more losses of 9p21, in both benign and neoplastic cells, than the older patient group. In addition, we demonstrated an increase in the percentage of cells showing gain of locus 20q13.2 with progression from benign epithelium through dysplasia to adenocarcinoma, with almost the same trend in the younger and the older patients. When compared with the older age group, younger patients with GEJ adenocarcinoma have similar demographics, environmental factors, and clinical and pathologic characteristics. The most commonly detected genetic aberrations of progressive Y chromosome loss, 9p21 locus loss, and 20q13 gain were similar in the younger and older patients. However, the rate of loss of 9p21 is significantly higher in young patients, in both the benign and the neoplastic cells. The loss of 9p21 and, possibly, the subsequent inactivation of the p16 gene may be one of the molecular mechanisms responsible for an accelerated neoplastic process in young patients.
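The group comparisons of FISH counts mentioned above (chi-square and Fisher's exact tests) are standard contingency-table tests; a minimal sketch with scipy follows. The 2x2 counts are made-up numbers used only to show the calls, not values from the study.

```python
# Minimal sketch of a 2x2 group comparison (scipy); the counts are hypothetical.
from scipy.stats import fisher_exact, chi2_contingency

# rows: young vs old patients, columns: cells with / without 9p21 loss (made-up data)
table = [[40, 60],
         [20, 80]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"Fisher exact p = {p_fisher:.4f}, chi-square p = {p_chi2:.4f}")
```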

Relevance: 30.00%

Abstract:

It is well known that theories of the firm have evolved along a path marked by an increasing awareness of the importance of organizational structure: from the early "neoclassical" conceptualizations that viewed the firm as a rational actor whose aim is to produce the amount of output which, given the inputs at its disposal and subject to technological or environmental constraints, maximizes revenue (see Boulding, 1942 for a mid-century state-of-the-art discussion), to the knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), which recognizes the firm as a knowledge-creating entity with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature to one stream connected to a crucial question about the nature of firm behaviour and the determinants of competitive advantage. In doing so I adopt a perspective that treats the organizational structure of the firm as the element according to which the different theories can be discriminated. The approach starts from the drawbacks of the standard neoclassical theory of the firm; after discussing the most influential theoretical approaches, I end up with a close examination of the knowledge-based perspective, within which the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is for the most part embedded in the human capital of the individuals that compose the organization. In a knowledge-based organization, management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape the organizational forms in a way that relies on "cross-functional processes, extensive delayering and empowerment" (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, it also shapes the technological trajectories along which the firm moves. Having recognized the growing importance of the firm's organizational structure in the theoretical literature, the next step of the analysis is to provide an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors have to deal with the challenges posed by processes of internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low value added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have become widespread across several economies in recent years.
Recently, the emergence of a "new" form of production organization has been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of such a model is its recognition of the importance of employee commitment and involvement. It is consequently characterized by a strong accent on human resource management and on those practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007). This "model" of production organization is often referred to as a High Performance Work System (HPWS). Despite the increasing diffusion of workplace practices that may be inscribed within the concept of HPWS in western countries' companies, it is to some extent hazardous to speak of the emergence of a "new organizational paradigm". A discussion of organizational changes and of the diffusion of high performance work practices (HPWP) cannot abstract from a discussion of industrial relations systems, with a particular accent on employment relationships, because of their relevance, alongside production organization, in determining two major outcomes of the firm: innovation and economic performance. The argument is developed starting from the issue of Social Dialogue at the macro level, from both a European and an Italian perspective. The model of interaction between the social partners has repercussions, at the micro level, on employment relationships, that is to say on the relations between union delegates and management, or between workers and management. Finding economic and social policies capable of sustaining growth and employment within a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcome of social dialogue. As Acocella and Leoni (2007) put forward, social pacts may constitute an instrument to trade wage moderation for high intensity of ICT, organizational and human capital investments. Empirical evidence, especially at the micro level, of a positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among the social partners may become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations elements within a unified framework: the HPWS. Such a choice may help in disentangling the potential complementarities between these two aspects of the firm's internal structure with respect to economic and innovative performance. The third chapter begins the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to the manufacturing firms of the Reggio Emilia province with more than 50 employees. The data were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). The cross-section datasets are coupled with a further data source consisting of longitudinal balance sheets (1994-2004). Collecting reliable data that in turn provide reliable results always requires a great effort, with uncertain results.
Data at the micro level are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low level of resolution); the narrower the focus on a specific geographical context, the greater the amount of information usually collected (high level of resolution). For the Italian case, the evidence on the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini, et al., 2003). The thesis is also devoted to the deepening of an issue of particular interest: the existence of complementarities between HPWS practices. Empirical evidence has widely shown that when HPWP are adopted in bundles they are more likely to impact on firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this true also for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm. As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwoon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovation, usually approximated by R&D investment expenditure or the number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and the firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm's propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, they shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because they are not opposed by unions. The first empirical chapter shows that the different types of innovation seem to respond in different ways to the HPWS variables. The processes underlying product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, it is possible to extract some general results in terms of the HPWS factors that most influence innovative performance. The three main aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types, as do the components containing such variables. In sum, the aspects of the HPWS influence the firm's propensity to innovate. At the same time, rather neat (although not always strong) evidence emerges of the presence of complementarities between HPWS practices. In terms of the complementarity issue, it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. Having a sound skill base may be an element that enhances the firm's capacity to innovate.
It may enhance both the capacity to absorb exogenous innovation and the capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur the propensity to innovate: the former because of their incentive nature, the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There has been a bulk of international empirical studies on the relation between organizational changes and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while the works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis, the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As remarked by Addison, "although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions" (Addison, 2005, p.407). The analysis, conducted by exploiting temporal lags between the dependent variable and the covariates, a possibility given by the merger of the cross-section and panel data, provides evidence in favour of an impact of HPWS practices on the firm's economic performance, measured in different ways. Although robust evidence of complementarities among HPWS aspects with respect to performance does not seem to emerge, there is evidence of a general positive influence of the single practices. The results are quite sensitive to the time lags, suggesting that time-varying heterogeneity is an important factor in determining the impact of organizational changes on economic performance. The implications of the analysis can be of help both to management and to local-level policy makers. Although the results cannot simply be extended to other local production systems, it may be argued that the results and implications obtained here can also fit contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by a deep-rooted unionism with strong supporting institutions. A hope for future research on the subject treated in the present work, however, is that of collecting good-quality information over wider geographical areas, possibly at the national level, and repeated over time. Only in this way is it possible to untie the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations.
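A regression that exploits temporal lags between the dependent variable and the HPWS covariates, as described above, can be sketched with pandas and statsmodels. The panel file, all variable names and the one-period lag are assumptions for the example, not the thesis's actual specification.

```python
# Illustrative sketch of a performance regression with lagged HPWS covariates.
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("reggio_emilia_panel.csv")        # hypothetical firm-year panel
panel = panel.sort_values(["firm_id", "year"])

# Lag the HPWS indicators by one period within each firm
for col in ["training_coverage", "employee_involvement", "bonus_diffusion"]:
    panel[col + "_lag1"] = panel.groupby("firm_id")[col].shift(1)

panel = panel.dropna()
X = sm.add_constant(panel[["training_coverage_lag1",
                           "employee_involvement_lag1",
                           "bonus_diffusion_lag1",
                           "size"]])
y = panel["value_added_per_employee"]
print(sm.OLS(y, X).fit(cov_type="HC1").summary())   # robust standard errors
```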

Relevance: 30.00%

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as, e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer Linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously being published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and we cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a problem with some structure. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the situations mentioned above. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions used to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas on multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the use of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
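The destroy-and-repair control loop mentioned above can be illustrated on a toy routing instance. The thesis repairs the destroyed solution by solving an integer programming neighborhood with a general-purpose MIP solver; the sketch below deliberately substitutes a greedy cheapest-insertion repair so it runs without any solver dependency, and is not the thesis's algorithm.

```python
# Minimal sketch of a destroy-and-repair loop on a toy TSP. The repair step here is
# greedy cheapest insertion, a stand-in for the ILP-based repair described above.
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy(tour, k, rng):
    removed = rng.sample(tour, k)                 # drop k random customers
    return [c for c in tour if c not in removed], removed

def repair(partial, removed, pts):
    tour = partial[:]
    for c in removed:                             # cheapest re-insertion of each customer
        best_pos, best_cost = 0, float("inf")
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            cost = (math.dist(pts[a], pts[c]) + math.dist(pts[c], pts[b])
                    - math.dist(pts[a], pts[b]))
            if cost < best_cost:
                best_pos, best_cost = i + 1, cost
        tour.insert(best_pos, c)
    return tour

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(40)]
best = list(range(len(pts)))
for _ in range(500):                              # destroy-and-repair iterations
    partial, removed = destroy(best, k=8, rng=rng)
    cand = repair(partial, removed, pts)
    if tour_length(cand, pts) < tour_length(best, pts):
        best = cand
print("best tour length:", round(tour_length(best, pts), 3))
```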

Relevance: 30.00%

Abstract:

A group G has finite Prüfer rank (respectively, co-central rank) at most r if every finitely generated subgroup H of G (respectively, H modulo its centre) can be generated by r elements. In the present work, the known theorems on groups of finite Prüfer rank (X-groups, for short) are generalized, as far as possible, to the substantially larger class of groups of finite co-central rank (R-groups, for short). For locally nilpotent R-groups that are torsion-free or p-groups, it is shown that the central quotient must be an X-group. It follows that hypercentrality and local nilpotency are equivalent conditions for R-groups. Analogously, R-groups are locally soluble if and only if they are hyperabelian. Central to the structure theory of hyperabelian R-groups is the fact that such groups possess an ascending normal series of abelian X-groups. A Sylow theory for periodic hyperabelian R-groups is developed, and torsion-free hyperabelian R-groups are shown to be soluble. Furthermore, locally finite R-groups are almost hyperabelian. For R-groups, very large classes of groups coincide with the class of almost hyperabelian groups. To this end, the notion of a section cover is introduced, and it is shown that R-groups with an almost hyperabelian section cover are themselves almost hyperabelian.
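The two rank conditions can be restated compactly. The following LaTeX fragment only rewrites the definitions given in the abstract; the symbols d(.) for the minimal number of generators and Z(.) for the centre are notation introduced here for the example.

```latex
% Restatement of the two rank conditions from the abstract (notation d(.), Z(.) is ours).
A group $G$ has \emph{Pr\"ufer rank} at most $r$ if
\[
  d(H) \le r \quad \text{for every finitely generated subgroup } H \le G,
\]
and \emph{co-central rank} at most $r$ if
\[
  d\bigl(H/Z(H)\bigr) \le r \quad \text{for every finitely generated subgroup } H \le G,
\]
where $d(\cdot)$ denotes the minimal number of generators and $Z(\cdot)$ the centre.
```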

Relevance: 30.00%

Abstract:

Different tools were used to set up and adapt the model to the objective of this research.

1. The model. The base model used is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.

2. Tools and methods. 2.1 The Expert Choice software: Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem. 2.2 Personal interviews with the farms: for this research, the EMAS-certified farms of the Emilia-Romagna region were identified, with information provided by the EMAS centre in Vienna; personal interviews were carried out at each farm in order to obtain a complete and realistic judgment for each criterion of the hierarchy. 2.3 Questionnaire: a supporting questionnaire was also delivered and used for the interviews.

3. Elaboration of the data. After data collection, the data were elaborated with the support of the Expert Choice software.

4. Results of the analysis. The resulting figures (reported in a separate document) give a series of numbers which are fractions of unity, to be interpreted as the relative contribution of each element to the fulfillment of the respective objective. Calculating the benefit/cost ratio for each alternative gives the following. Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08. Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66. As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption in the agricultural sector is the best alternative. It has to be noted that the ratio is 1.08, a relatively low positive value. This shows the fragility of this conclusion and suggests a careful examination of the benefits and costs for each farm before adopting the scheme. On the other hand, the result needs to be taken into consideration by policy makers in order to improve their interventions regarding scheme adoption in the agricultural sector. According to the AHP elaboration of the judgments, the main considerations on benefits are the following:
- Legal compliance seems to be the most important benefit for the agricultural sector, since its rank is 0.471.
- The next two most important benefits are improved internal organization (ranking 0.230) followed by competitive advantage (ranking 0.221), mostly due to the sub-element improved image (ranking 0.743).
- Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.
According to the AHP elaboration of the judgments, the main considerations on costs are the following:
- External costs seem to be far more important than internal ones (ranking 0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle.
- The implementation of the EMS is the most challenging element regarding the internal costs (ranking 0.750).
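The AHP priority numbers quoted above come from pairwise-comparison matrices; a common way to derive them is the principal eigenvector method. The sketch below shows that computation on a made-up 3x3 judgment matrix (not the study's actual judgments) and reproduces the benefit/cost ratio arithmetic from the abstract.

```python
# Minimal sketch of AHP priority weights via the principal eigenvector (numpy).
import numpy as np

# Made-up pairwise comparisons on Saaty's 1-9 scale for three benefit criteria
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1.0],
              [1/2, 1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalized priority vector (local weights)
print("weights:", np.round(w, 3))

# Consistency ratio: CR < 0.10 is usually taken as acceptable (random index 0.58 for n=3)
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
print("CR:", round(ci / 0.58, 3))

# Benefit/cost ratio of the "implement EMAS" alternative, as in the abstract
print("B/C ratio:", round(0.877 / 0.815, 2))   # ~1.08
```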

Relevance: 30.00%

Abstract:

This thesis explores the production of software for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the work is the definition of a meta-model that captures the fundamental concepts of embedded systems. This model abstracts away from the particular platform in use and identifies the abstractions that characterize embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation, a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined here is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in terms of a model that is an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext, a framework that lets one write EBNF rules and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide a syntax-driven editor for the language defined by the meta-model. Automatic code generation was implemented with the Xtend2 language, which makes it possible to walk the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides practically the entire schematic part of the application, leaving the development of the business logic to the application designer. After defining the meta-model of a single embedded system, the level of abstraction was raised towards the part of the meta-model describing the interaction of an embedded system with other systems. The perspective thus shifts to that of a System, understood as a set of interacting concentrated systems; this definition is made from the point of view of the concentrated system whose model is being defined. The thesis also introduces a case study which, although fairly simple, serves as an example and tutorial for developing applications with the meta-model. It also shows how the task of the application designer becomes rather simple and immediate, provided it rests on a sound analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
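The core idea above, a platform-independent model rendered into platform code that supplies the schematic part and leaves the business logic open, can be illustrated with a tiny stand-in. The sketch below is not the thesis's Xtext/Xtend tool chain: the model is just a Python dictionary and the generator a string template, with all names (GreenhouseMonitor, soilMoisture, pump) invented for the example.

```python
# Hedged illustration of model-to-code generation: a dict-based, platform-independent
# model rendered into an Arduino-style sketch. Not the thesis's Xtext/Xtend pipeline.
model = {
    "name": "GreenhouseMonitor",
    "sensors":   [{"name": "soilMoisture", "pin": "A0"}],
    "actuators": [{"name": "pump", "pin": 7}],
}

def generate_ino(m: dict) -> str:
    lines = [f"// generated from model '{m['name']}'"]
    for a in m["actuators"]:
        lines.append(f"const int {a['name']}Pin = {a['pin']};")
    lines += ["", "void setup() {"]
    for a in m["actuators"]:
        lines.append(f"  pinMode({a['name']}Pin, OUTPUT);")
    lines += ["}", "", "void loop() {"]
    for s in m["sensors"]:
        lines.append(f"  int {s['name']} = analogRead({s['pin']});")
    lines.append("  // business logic left to the application designer")
    lines += ["}", ""]
    return "\n".join(lines)

print(generate_ino(model))
```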

Relevance: 30.00%

Abstract:

In this thesis we have developed solutions to common issues affecting widefield microscopes, addressing the intensity inhomogeneity of single images and two strong limitations: the impossibility of acquiring highly detailed images representative of whole samples, and of imaging deep 3D objects. First, we cope with the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular we proposed, for both light and fluorescence microscopy, non-parametric multi-image based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single image completely in focus, starting from objects having significant depth or lying in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
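Flat-field (vignetting) correction of the kind discussed above divides each image by an estimate of the slowly varying illumination field. The sketch below estimates that field from a stack of independent frames by a median followed by heavy smoothing; this is a common simplification stated as an assumption, not the thesis's exact non-parametric method.

```python
# Minimal sketch of multi-image flat-field (vignetting) correction with numpy/scipy.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_flat(stack: np.ndarray, sigma: float = 50.0) -> np.ndarray:
    """stack: (n_images, H, W). The median over frames suppresses sample structure;
    heavy Gaussian smoothing keeps only the slowly varying vignetting field."""
    flat = gaussian_filter(np.median(stack, axis=0), sigma=sigma)
    return flat / flat.mean()          # normalize so the correction preserves the intensity scale

def correct(image: np.ndarray, flat: np.ndarray) -> np.ndarray:
    return image / flat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.uniform(0.8, 1.2, size=(20, 256, 256))   # synthetic stand-in frames
    flat = estimate_flat(stack)
    corrected = correct(stack[0], flat)
    print(corrected.shape, round(float(corrected.mean()), 3))
```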