859 results for Business enterprises -- Electronic data processing -- Study and teaching (Higher) -- Chile


Relevance: 100.00%

Publisher:

Abstract:

Proneuropeptide Y (ProNPY) undergoes cleavage at a single dibasic site, Lys38-Arg39, yielding the 1-39 amino acid form of NPY, which is further processed successively by carboxypeptidase-like and peptidylglycine alpha-amidating monooxygenase enzymes. To investigate whether prohormone convertases are involved in ProNPY processing, a vaccinia virus derived expression system was used to coexpress recombinant ProNPY with each of the prohormone convertases PC1/3, PC2, furin, and PACE4 in Neuro2A and NIH 3T3 cell lines, as regulated neuroendocrine and constitutive prototype cell lines, respectively. Analysis of the processed products shows that only PC1/3 generates NPY in NIH 3T3 cells, while both PC1/3 and PC2 are able to generate NPY in Neuro2A cells. The convertases furin and PACE4 are unable to process ProNPY in either cell line. Moreover, comparative in vitro cleavage of the recombinant NPY precursor by the enzymes PC1/3, PC2 and furin shows that only PC1/3 and PC2 carry out specific cleavage of the dibasic site. Kinetic studies demonstrate that PC1/3 cleaves ProNPY more efficiently than PC2; the difference in cleavage efficiency lies mainly in the Vmax values, whereas no major difference is observed in the Km values. In addition, cleavage by PC1/3 and PC2 of two peptides reproducing the dibasic cleavage site with different sequence lengths, namely (20-49)-ProNPY and (28-43)-ProNPY, was studied. These shortened ProNPY substrates, when recognized by the enzymes, are cleaved more efficiently than ProNPY itself. The shortest peptide is cleaved by PC1/3 but not by PC2. On the basis of these observations it is proposed, first, that constitutively secreted NPY does not result from cleavage by the ubiquitously expressed enzymes furin and PACE4; second, that PC1/3 and PC2 are not equipotent in the cleavage of ProNPY; and third, that substrate peptide length might discriminate between PC1/3 and PC2 processing activity.
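The kinetic comparison reported above (similar Km, higher Vmax for PC1/3) can be illustrated with the standard Michaelis-Menten relation. A minimal sketch, assuming hypothetical parameter values that are not the study's measurements:

```python
def michaelis_menten_rate(vmax, km, s):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

def catalytic_efficiency(vmax, km):
    """Vmax / Km, the usual figure of merit for comparing two enzymes on one substrate."""
    return vmax / km

# Hypothetical parameters: similar Km, higher Vmax for PC1/3, as the abstract reports
pc13 = {"vmax": 10.0, "km": 5.0}
pc2 = {"vmax": 2.0, "km": 5.0}

# With equal Km, the enzyme with the higher Vmax is the more efficient cleaver
assert catalytic_efficiency(**pc13) > catalytic_efficiency(**pc2)
```

With equal Km, the Vmax ratio alone determines which convertase cleaves the precursor faster at any substrate concentration.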

Relevance: 100.00%

Publisher:

Abstract:

METHODS: We examined 20 patients from 2 unrelated Swiss families to describe their clinical phenotype. In addition, a linkage analysis was performed in an attempt to confirm the reported genetic homogeneity of this condition as well as to refine its genomic localization. RESULTS: Two-point analysis provided a cumulative LOD score of 3.03 with marker D3S2305. The absence of recombination precluded further refinement of the disease interval. CONCLUSIONS: Our data confirm the genetic homogeneity and the extreme variability of expression of this condition, which occasionally mimics low-tension glaucoma.

Relevance: 100.00%

Publisher:

Abstract:

A study of the business opportunities of Finnish companies in reducing carbon dioxide emissions in Northwest Russia.

Relevance: 100.00%

Publisher:

Abstract:

Within Data Envelopment Analysis, several alternative models allow for an environmental adjustment, and the majority of them deliver divergent results, leaving decision makers with the difficult task of selecting the most suitable model. This study addresses that difficulty and thereby fills a research gap. First, a two-step web-based survey is conducted to (1) identify the selection criteria, (2) prioritize and weight the selection criteria with respect to the goal of selecting the most suitable model, and (3) collect preferences about which model best fulfils each selection criterion. Second, the Analytic Hierarchy Process is used to quantify the preferences expressed in the survey. Results show that the understandability, applicability and acceptability of the alternative models are valid selection criteria. The selection of the most suitable model depends on the preferences of the decision makers with regard to these criteria.
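The Analytic Hierarchy Process step described above can be sketched as follows: given a Saaty-style reciprocal pairwise comparison matrix over the three reported criteria, priority weights are derived here with the common geometric-mean approximation of the principal eigenvector. The pairwise judgments below are hypothetical, not the survey's data:

```python
import math

criteria = ["understandability", "applicability", "acceptability"]

# Hypothetical reciprocal comparison matrix: A[i][j] is the relative
# importance of criterion i over criterion j on Saaty's 1-9 scale
A = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]

# Geometric mean of each row, normalized to sum to 1, approximates
# the principal eigenvector of a (near-)consistent matrix
geo = [math.prod(row) ** (1.0 / len(A)) for row in A]
weights = {c: g / sum(geo) for c, g in zip(criteria, geo)}
# For this (perfectly consistent) matrix: 4/7, 2/7 and 1/7
```

The full eigenvector method and a consistency-ratio check would be used in practice; the geometric-mean shortcut keeps the sketch dependency-free.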

Relevance: 100.00%

Publisher:

Abstract:

The main objective of the study was to examine the use of computer-assisted auditing. The study is divided into a theoretical and an empirical part. The theoretical part reviews the audit process, presents the tools of computer-assisted auditing, and assesses, on the basis of the literature and other source material, the benefits and risks that IT brings. The empirical part used a questionnaire survey directed at auditors to examine how widespread the use of IT is in the auditing profession, how auditors themselves view its benefits and drawbacks, and how they expect computer-assisted auditing to develop in the near future. The results are compared with those of earlier studies on the same topic. The comparison shows that the use of computers in audit work has clearly increased. It should be noted that the introduction of IT into auditing brings problems that must be recognized, but the additional benefits it provides are so substantial that, in the future, efficient audit work will not be possible without computer-assisted methods.

Relevance: 100.00%

Publisher:

Abstract:

Monte Carlo simulations were used to generate data for ABAB designs of different lengths. The points of phase change are randomly determined before gathering behaviour measurements, which allows the use of a randomization test as an analytic technique. Data simulation and analysis can be based either on data-division-specific or on common distributions, and which method is followed affects the results obtained once the randomization test has been applied. The goal of the study was therefore to examine these effects in more detail. The discrepancies between the two approaches become obvious when data with zero treatment effect are considered, and they have implications for statistical power studies. Data-division-specific distributions provide more detailed information about the performance of the statistical technique.
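A minimal sketch of the kind of randomization test studied here, assuming a mean-difference statistic and a minimum phase length of two observations; the data series and the actual change points below are hypothetical:

```python
import itertools
import statistics

def abab_statistic(data, c1, c2, c3):
    """Mean of B phases minus mean of A phases for an ABAB split at c1, c2, c3."""
    a_phase = data[:c1] + data[c2:c3]
    b_phase = data[c1:c2] + data[c3:]
    return statistics.mean(b_phase) - statistics.mean(a_phase)

def randomization_p_value(data, actual, min_len=2):
    """One-sided p-value: the share of admissible change-point triples whose
    statistic is at least as large as the one for the actual change points."""
    n = len(data)
    observed = abab_statistic(data, *actual)
    stats = [
        abab_statistic(data, c1, c2, c3)
        for c1, c2, c3 in itertools.combinations(range(1, n), 3)
        if c1 >= min_len and c2 - c1 >= min_len
        and c3 - c2 >= min_len and n - c3 >= min_len
    ]
    return sum(s >= observed for s in stats) / len(stats)

# Hypothetical series with a clear treatment effect in the B phases
data = [2, 3, 2, 7, 8, 7, 3, 2, 3, 8, 7, 8]
p = randomization_p_value(data, actual=(3, 6, 9))
```

Because the change points were chosen at random before measurement, the enumeration over all admissible triples gives a valid reference distribution for the observed statistic.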

Relevance: 100.00%

Publisher:

Abstract:

Background: Pulseless electrical activity (PEA) cardiac arrest is defined as a cardiac arrest (CA) presenting with residual organized electrical activity on the electrocardiogram. In recent decades the incidence of PEA has steadily increased compared to other types of CA, such as ventricular fibrillation or pulseless ventricular tachycardia. PEA is frequently induced by reversible conditions. The "4 (or 5) H's" and "4 (or 5) T's" are proposed as a mnemonic to assess for Hypoxia, Hypovolemia, Hypo-/Hyperkalaemia, Hypothermia, Thrombosis (cardiac or pulmonary), cardiac Tamponade, Toxins, and Tension pneumothorax. Other pathologies (intracranial haemorrhage, severe sepsis, myocardial contraction dysfunction) have been identified as potential causes of PEA, but their respective probabilities and frequencies are unclear and they are not yet included in the resuscitation guidelines. The aim of this study was to analyse the aetiologies of out-of-hospital PEA cardiac arrest in order to evaluate the relative frequency of each cause and thereby improve the management of patients suffering a PEA cardiac arrest. Method: This retrospective study was based on data routinely and prospectively collected for each PEMS intervention. All adult patients treated from January 1st, 2002 to December 31st, 2012 by the PEMS for out-of-hospital cardiac arrest (OHCA), with PEA as the first recorded rhythm, and admitted to the emergency department (ED) of the Lausanne University Hospital were included. The aetiologies of PEA cardiac arrest were classified into subgroups based on the classical H's and T's classification, supplemented by four additional subgroups: trauma, intracranial haemorrhage (ICH), non-ischemic cardiomyopathy (NIC) and undetermined cause. Results: 1866 OHCA cases were treated by the PEMS. PEA was the first recorded rhythm in 240 adult patients (13.8%). After exclusion of 96 patients, 144 patients with a PEA cardiac arrest admitted to the ED were included in the analysis. The mean age was 63.8 ± 20.0 years, 58.3% were men, and the survival rate at 48 hours was 29%. 32 different causes of OHCA with PEA were established for 119 patients; for 25 patients (17.4%) no specific cause of the PEA cardiac arrest could be attributed. Hypoxia (23.6%), acute coronary syndrome (12.5%) and trauma (12.5%) were the three most frequent causes. Pulmonary embolism, hypovolemia, intoxication and hyperkalaemia each occurred in less than 10% of the cases (7.6%, 5.6%, 3.5% and 2.1%, respectively). Non-ischemic cardiomyopathy and intracranial haemorrhage occurred in 8.3% and 6.9% of the cases, respectively. Conclusions: According to our results, intracranial haemorrhage and non-ischemic cardiomyopathy represent noticeable causes of PEA in OHCA, with a prevalence equalling or exceeding the frequency of the classical 4 H's and 4 T's aetiologies. These two pathologies are potentially accessible to simple diagnostic procedures (non-contrast CT scan or echocardiography) and should be included in the 4 H's and 4 T's mnemonic.

Relevance: 100.00%

Publisher:

Abstract:

This article reviews data obtained through research into early childhood mathematics education in Spain. It analyses the current curricular directions in mathematics education with early learners, and it provides an overview of mathematical practices in early childhood education classrooms in order to analyse the commonalities and differences between research, curriculum and educational practice. A review of the research presented at SEIEM symposia from 1997 until 2012 demonstrates that: a) very little research has been done, a trend repeated in other venues, such as the JCR Social Sciences Edition or the PME; b) the first steps have been taken towards an increasingly cohesive body of research, although until now there has not been enough data to outline the curricular directions; and c) some discrepancies still exist between the mathematical practices in early childhood education classrooms and the official guidelines.

Relevance: 100.00%

Publisher:

Abstract:

This article suggests the study of the key concept of conflict as a means of implementing a critical and communicative curriculum based on the study of relevant social themes. To this end we put forward the principal characteristics of the critical/communicative curriculum. We offer a didactic proposal about conflict and explain the results of its application in two Secondary Education classrooms.

Relevance: 100.00%

Publisher:

Abstract:

This study evaluates the use of role-playing games (RPGs) as a methodological approach for teaching cellular biology, assessing student satisfaction, learning outcomes, and retention of the acquired knowledge. First-year undergraduate medical students at two Brazilian public universities attended either an RPG-based class (RPG group) or a lecture (lecture-based group) on topics related to cellular biology. Pre- and post-class questionnaires were compared with scores in regular exams and in an unannounced test one year later to assess students' attitudes and learning. Of the 230 students who attended the RPG classes, 78.4% responded that the RPG-based classes were an effective tool for learning; 55.4% thought that such classes were better than lectures but did not replace them; and 81% responded that they would use this method. The lecture-based group achieved a higher grade in 1 of 14 regular exam questions. In the medium-term evaluation (one year later), the RPG group scored higher in 2 of 12 questions. RPG classes are thus quantitatively as effective as formal lectures, are well accepted by students, and may serve as educational tools, giving students the chance to learn actively and potentially to retain the acquired knowledge more efficiently.

Relevance: 100.00%

Publisher:

Abstract:

After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered "closed" and expensive: data structures are complex, and the "out-of-the-box" integration options offered are not based on industry standards. Expensive and time-consuming projects are therefore undertaken to make the required data flow according to the needs of business processes. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business processes and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the "single version of the truth" MDM mantra. Adding one central repository of master data might prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims to understand the current literature on MDM and contrast it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM. The most difficult piece of master data to manage is the "local" part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with a possible tool to evaluate its product from a vendor-agnostic perspective.

Relevance: 100.00%

Publisher:

Abstract:

This dissertation critically reviews the idea of meritocracy from both a theoretical and an empirical perspective. Based on a discussion of classical texts of social philosophy and sociology, it is argued that meritocracy as a concept for social stratification is best compatible with the sociological tradition of status attainment research: both frame social inequality in primarily individualistic terms, centring on the role of ascribed (e.g., gender, social background) and achieved (e.g., educational qualifications) characteristics for determining individuals’ socioeconomic rewards. This theoretical argument introduces the research problem at the core of this dissertation: to what extent can the individualistic conception of social stratification be maintained empirically? Fields of study and their interaction with educational attainment levels play a prominent role in the analysis of this question. Drawing on sociological versions of segmented labour market theory, it is assumed that fields of study may channel individuals into heterogeneous political-economic contexts on the labour market, which potentially modify the socioeconomic benefit individuals derive from their qualification levels. The focus on fields of study may also highlight economic differentials between men and women that derive from the persisting segregation of men’s and women’s occupational and educational specializations rather than direct gender discrimination on the labour market. The quantitative analyses in this dissertation consist of three research articles, which are based primarily on Finnish data, but occasionally extend the view to other European countries. The data sources include register-based macro- and microdata as well as survey data. Article I examines the extent and the patterns of gender segregation within the Finnish educational system between 1981 and 2005. 
The results show that differences between men's and women's field specializations have for the most part remained stable during this period, with particularly high levels of gender segregation observed at lower educational levels. The focus in Article II rests on the effects of gender-segregated fields of study on higher education graduates' occupational status. It is shown that fields of study matter for accessing professional jobs and avoiding low-skilled positions in Finland: at the early career stage, particularly polytechnic graduates from female-dominated fields are less likely to work in professional positions. Finnish university graduates from male-dominated fields were more likely than their peers with different specializations to work as professionals, yet they also faced a greater risk of being sorted into low-skilled jobs if they failed to make use of this advantage. Article III proceeds to analyse the joint impact of educational qualification levels and fields of study on young adults' median earnings in Finland between 1985 and 2005. The results show that qualification levels do not confer a consistent benefit in the process of earnings stratification. Advanced qualifications raise median earnings most clearly among individuals specializing in the same field of study. When comparing individuals with different field specializations, on the other hand, higher-level qualifications do not necessarily lead to higher median earnings. Overall, the findings of this dissertation reveal a heterogeneous effect of education for achieving social positions, which challenges individual-centred, meritocratic accounts of social stratification and underlines the problematic lack of structural and institutional dimensions in the dominant account of social status attainment.

Relevance: 100.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Publisher:

Abstract:

Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features; low-level Local Binary Pattern (LBP) based features play a central role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
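The low-level descriptor at the core of this work can be illustrated with the basic 3x3 LBP operator. This plain-Python sketch is illustrative only (the 3x3 image is made up, and the thesis targets hardware such as the MIPA4k focal-plane processor rather than software loops):

```python
def lbp_code(image, y, x):
    """8-bit Local Binary Pattern: threshold the 8 neighbours of (y, x)
    against the centre pixel and pack the comparison bits into one byte."""
    center = image[y][x]
    # Neighbour offsets, clockwise starting from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if image[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
brighter = [[v + 25 for v in row] for row in img]

# The code depends only on the sign of local differences, so any monotonic
# grey-level change (e.g. a constant brightness offset) leaves it unchanged
assert lbp_code(img, 1, 1) == lbp_code(brighter, 1, 1)
```

This sign-of-difference property is what makes LBP-type features invariant to the illumination changes mentioned above.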

Relevance: 100.00%

Publisher:

Abstract:

The objective of this study was to assess the potential use of ostrich meat trimmings in hamburger preparation, as well as their physicochemical and sensory characterization. Using ostrich meat trimmings from the legs and neck, four formulations were prepared with varied amounts of bacon and textured soybean protein. Physical analyses of yield, shrinkage percentage, and water retention capacity, and chemical analyses of proximate composition, cholesterol levels, and calories were performed. The formulations underwent sensory analysis by 52 potential ostrich meat consumers, who evaluated tenderness, juiciness, flavor, and purchase intent. The formulations containing textured soybean protein showed the highest yield, lowest shrinkage percentage, and highest water retention capacity. Lipid content varied from 0.58 to 4.99%; protein from 17.08 to 21.37%; ash from 3.00 to 3.62%; moisture from 73.87 to 76.27%; cholesterol from 22.54 to 32.11 mg.100 g-1; and calories from 87.22 to 163.42 kcal.100 g-1. All formulations showed low cholesterol and calorie levels, even the one containing 10% bacon and 3.5% textured soybean protein, which achieved the best scores and acceptance among the panelists.