934 results for Data anonymization and sanitization
Abstract:
Genomics is expanding the horizons of epidemiology, providing a new dimension for classical epidemiological studies and inspiring the development of large-scale multicenter studies with the statistical power necessary for the assessment of gene-gene and gene-environment interactions in cancer etiology and prognosis. This paper describes the methodology of the Clinical Genome of Cancer Project in São Paulo, Brazil (CGCP), which includes patients with nine types of tumors and controls. Three major epidemiological designs were used to reach specific objectives: cross-sectional studies to examine gene expression, case-control studies to evaluate etiological factors, and follow-up studies to analyze genetic profiles in prognosis. The clinical groups entered patients' data into the electronic database through the Internet. Two approaches were used for data quality control: continuous data evaluation and data entry consistency. A total of 1749 cases and 1509 controls were entered into the CGCP database from the first trimester of 2002 to the end of 2004. Continuous evaluation showed that, for all tumors taken together, only 0.5% of the general form fields still included potential inconsistencies by the end of 2004. Regarding data entry consistency, the highest percentage of errors (11.8%) was observed for the follow-up form, followed by 6.7% for the clinical form, 4.0% for the general form, and only 1.1% for the pathology form. Good data quality is required to transform the data into useful information for clinical application and for preventive measures. The use of the Internet for communication among researchers and for data entry is perhaps the most innovative feature of the CGCP. The monitoring of patients' data ensured their quality.
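The data entry consistency check described above, in which forms are re-entered and compared field by field, can be illustrated with a short sketch. This is a generic example, not the CGCP's actual software; the field names and values are invented.

```python
def field_discrepancy(entry_a: dict, entry_b: dict) -> float:
    """Percentage of fields whose values differ between two entries of the same form."""
    fields = set(entry_a) | set(entry_b)
    if not fields:
        return 0.0
    mismatches = sum(1 for f in fields if entry_a.get(f) != entry_b.get(f))
    return 100.0 * mismatches / len(fields)

# Hypothetical general form entered twice by different operators.
first_entry  = {"age": 56, "sex": "F", "smoker": "no",  "tumor_site": "breast"}
second_entry = {"age": 56, "sex": "F", "smoker": "yes", "tumor_site": "breast"}
print(f"{field_discrepancy(first_entry, second_entry):.1f}% of fields disagree")
```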
Abstract:
Our objective was to examine the effect of gender on the sleep pattern of patients referred to a sleep laboratory. The data (questionnaires and polysomnographic recordings) were collected from a total of 2365 patients (1550 men and 815 women). Polysomnography permits an objective assessment of the sleep pattern. We included only polysomnography exams obtained with no more than one recording system in order to permit normalization of the data. Men had a significantly higher body mass index than women (28.5 ± 4.8 vs 27.7 ± 6.35 kg/m²) and a significantly higher score on the Epworth Sleepiness Scale (10.8 ± 5.3 vs 9.5 ± 6.0), suggesting daytime sleepiness. Women had a significantly higher sleep latency than men, as well as a higher rapid eye movement (REM) latency. Men spent more time in stages 1 (4.6 ± 4.1 vs 3.9 ± 3.8) and 2 (57.0 ± 10.5 vs 55.2 ± 10.1) of non-REM sleep than women, whereas women spent significantly more time in deep sleep stages (3 and 4) than men (22.6 ± 9.0 vs 19.9 ± 9.0). The apnea/hypopnea and arousal indexes were significantly higher in men than in women (31.0 ± 31.5 vs 17.3 ± 19.7). The periodic leg movement index did not differ significantly between genders, but did differ among age groups. We did not find significant differences between genders in the percentage of REM sleep or in sleep efficiency. The results of the current study suggest that there are specific gender differences in the sleep pattern.
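As a rough illustration of the kind of group comparison reported above, the published summary statistics for body mass index (mean ± SD, with 1550 men and 815 women) can be fed into a two-sample t-test. This is only a sketch; Welch's test is assumed here and may differ from the test actually used in the study.

```python
from scipy.stats import ttest_ind_from_stats

# BMI of men vs women, using the summary statistics quoted in the abstract.
t, p = ttest_ind_from_stats(mean1=28.5, std1=4.8, nobs1=1550,
                            mean2=27.7, std2=6.35, nobs2=815,
                            equal_var=False)  # Welch's t-test (assumption)
print(f"BMI, men vs women: t = {t:.2f}, p = {p:.4f}")
```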
Abstract:
Vertebrates have a central clock as well as several peripheral clocks. Light responses might result from the integration of light signals by these clocks. The dermal melanophores of Xenopus laevis have a photoreceptor molecule known as melanopsin (OPN4x). The mechanisms of the circadian clock involve positive and negative feedback loops. We hypothesize that these dermal melanophores also present peripheral clock characteristics. Using quantitative PCR, we analyzed the pattern of temporal expression of Opn4x and the clock genes Per1, Per2, Bmal1, and Clock in these cells, subjected to a 14-h light:10-h dark (14L:10D) regime or to constant darkness (DD). Also, in view of the physiological role of melatonin in the dermal melanophores of X. laevis, we determined whether melatonin modulates the expression of these clock genes. These genes show a time-dependent expression pattern when the cells are exposed to 14L:10D, which differs from the pattern observed under DD. Cells kept in DD for 5 days exhibited overall increased mRNA expression of Opn4x and Clock, and lower expression of Per1, Per2, and Bmal1. When the cells were kept in DD for 5 days and treated with melatonin for 1 h, 24 h before extraction, the mRNA levels tended to decrease for Opn4x and Clock, did not change for Bmal1, and increased for Per1 and Per2 at different Zeitgeber times (ZT). Although these data are limited to a single day of data collection, and are therefore preliminary, we suggest that the dermal melanophores of X. laevis might have some characteristics of a peripheral clock, and that melatonin modulates, to a certain extent, melanopsin and clock gene expression.
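The abstract does not state how the qPCR data were quantified; the sketch below shows the widely used 2^(-ΔΔCt) relative-expression calculation that such time-course data are commonly analyzed with, using invented Ct values.

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Relative expression of a target gene versus a calibrator sample (2^-ddCt)."""
    d_ct_sample = ct_target - ct_reference              # normalize to reference gene
    d_ct_calibrator = ct_target_cal - ct_reference_cal  # same for the calibrator
    return 2.0 ** (-(d_ct_sample - d_ct_calibrator))

# Hypothetical Ct values for Per1 at one Zeitgeber time, calibrated against ZT0.
print(relative_expression(ct_target=24.1, ct_reference=18.0,
                          ct_target_cal=25.3, ct_reference_cal=18.2))
```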
Abstract:
Our objective was to evaluate the biodegradation and osteogenic properties of magnesium scaffolds under in vivo conditions. Twelve 6-month-old male New Zealand white rabbits were randomly divided into two groups. The chosen operation site was the femoral condyle on the right side. The experimental group was implanted with porous magnesium scaffolds, while the control group was implanted with hydroxyapatite scaffolds. X-ray and blood tests, which included serum magnesium, alanine aminotransferase (ALT), creatinine (CREA), and blood urea nitrogen (BUN), were performed serially at 1, 2, and 3 weeks, and 1, 2, and 3 months. All rabbits were killed 3 months postoperatively, and the heart, kidney, spleen, and liver were analyzed with hematoxylin and eosin (HE) staining. The bone samples were subjected to microcomputed tomography (micro-CT) scanning and hard tissue biopsy. SPSS 13.0 (USA) was used for data analysis, and values of P<0.05 were considered to be significant. Gas bubbles appeared on the X-rays of the experimental group after 2 weeks, whereas no gas was seen in the control group. There were no statistical differences in serum magnesium concentration, ALT, BUN, or CREA between the two groups (P>0.05). All HE-stained slices were normal, which suggested good biocompatibility of the scaffold. Micro-CT showed that the magnesium scaffolds degraded mainly from the outside inward, and new bone grew in as the scaffolds degraded. The hydroxyapatite scaffold was not degraded and had fewer osteoblasts scattered on its surface. There was a significant difference in new bone formation and scaffold bioabsorption between the two groups (9.29±1.27 vs 1.40±0.49 and 7.80±0.50 vs 0.00±0.00 mm3, respectively; P<0.05). The magnesium scaffold performed well in terms of degradation and osteogenesis, and is a promising material for orthopedic applications.
Abstract:
Research publications on sales and operations have increased significantly in the last decades. The concept of sales and operations planning (S&OP) has gained increasing recognition and has been put forward as a key area within Supply Chain Management (SCM). The development of S&OP is driven by the need to determine future actions, for both sales and operations, since off-shoring, outsourcing, complex supply chains, and extended lead times make it challenging to respond to changes in the marketplace when they occur. The order intake of the case company has grown rapidly in recent years. Along with this growth, new challenges in data management and information flow have arisen due to the increasing number of customer orders. To manage these challenges, the case company has implemented an S&OP process; however, the process is still at an early stage and does not yet manage the increased customer orders adequately. The objective of this thesis is to explore the content of the case company's S&OP process in depth and to give recommendations for its further development. The objectives are categorized into six groups to clarify the purpose of the thesis. The qualitative research methods used are active participant observation, qualitative interviews, an enquiry, education, and a workshop. Demand planning was felt to be cumbersome; it is typically the biggest challenge in an S&OP process. The more proactive the sales forecasting can be, the longer the time horizon of operational planning becomes. An S&OP process is 60 percent change management, 30 percent process development, and 10 percent technology. Change management and continuous improvement can sometimes be arduous and treated as secondary. It is important that different people are involved in improving the process and that the process is constantly evaluated. Process governance also plays a central role and has to be managed consciously. In general, the S&OP process was seen as important and all stakeholders were committed to it, although particular sections were experienced as more important than others, depending on the stakeholders' points of view. The recommendations for each objective group are evaluated by achievable benefit and resource requirement. Urgent and easily implemented improvements should be executed first. The next steps are to develop a more coherent process structure and to refine cost awareness. After that, demand planning, supply planning, and reporting should be developed more profoundly. Finally, an information technology system should be implemented to support the process phases.
Abstract:
The emerging technologies have recently challenged the libraries to reconsider their role as a mere mediator between the collections, researchers, and wider audiences (Sula, 2013), and libraries, especially nationwide institutions like national libraries, haven't always managed to face the challenge (Nygren et al., 2014). In the Digitization Project of Kindred Languages, the National Library of Finland has become a node that connects the partners to interplay and work for shared goals and objectives. In this paper, I will be drawing a picture of the crowdsourcing methods that have been established during the project to support both linguistic research and lingual diversity. The National Library of Finland has been executing the Digitization Project of Kindred Languages since 2012. The project seeks to digitize and publish approximately 1,200 monograph titles and more than 100 newspaper titles in various, and in some cases endangered, Uralic languages. Once the digitization has been completed in 2015, the Fenno-Ugrica online collection will consist of 110,000 monograph pages and around 90,000 newspaper pages to which all users will have open access regardless of their place of residence. The majority of the digitized literature was originally published in the 1920s and 1930s in the Soviet Union, which was the genesis and consolidation period of these literary languages. This was the era when many Uralic languages were converted into media of popular education, enlightenment, and dissemination of information pertinent to the developing political agenda of the Soviet state. The 'deluge' of popular literature in the 1920s to 1930s suddenly challenged the lexical and orthographic norms of the limited ecclesiastical publications from the 1880s onward. Newspapers were now written in orthographies and in word forms that the locals would understand. Textbooks were written to address the separate needs of both adults and children. New concepts were introduced in the language. This was the beginning of a renaissance and period of enlightenment (Rueter, 2013). Linguistically oriented readers will also find much to their delight, especially lexical items specific to a given publication and orthographically documented specifics of phonetics. The project is financially supported by the Kone Foundation in Helsinki and is part of the Foundation's Language Programme. One of the key objectives of the Kone Foundation Language Programme is to support a culture of openness and interaction in linguistic research, but also to promote citizen science as a tool for the participation of the language community in research. In addition to sharing this aspiration, our objective within the Language Programme is to make sure that old and new corpora in Uralic languages are made available for the open and interactive use of the academic community as well as the language societies. Wordlists are available in 17 languages, but without tokenization, lemmatization, and so on. This approach was verified with the scholars, and we consider the wordlists as raw data for linguists. Our data is used for creating the morphological analyzers and online dictionaries at the Helsinki and Tromsø Universities, for instance. In order to reach these targets, we will produce not only the digitized materials but also development tools to support linguistic research and citizen science. The Digitization Project of Kindred Languages is thus linked with research on language technology.
The mission is to improve the usage and usability of the digitized content. During the project, we have developed methods that refine the raw data for further use, especially in linguistic research. How does the library meet objectives that appear to lie beyond its traditional playground? The written materials from this period are a gold mine, so how could we retrieve these hidden treasures of languages from a stack that contains more than 200,000 pages of literature in various Uralic languages? The problem is that the machine-encoded text (OCR) often contains too many mistakes to be used as such in research. The mistakes in OCRed texts must be corrected. To enhance the OCRed texts, the National Library of Finland developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary to implement, since these rare and peripheral prints often include characters that have since fallen out of use and are sadly neglected by modern OCR software developers, but that belong to the historical context of the kindred languages and thus are an essential part of the linguistic heritage (van Hemel, 2014). Our crowdsourcing application is essentially an editor for the ALTO XML format. It consists of a back-end for managing users, permissions, and files, which communicates through a REST API with a front-end interface, that is, the actual editor for correcting the OCRed text. The enhanced XML files can be retrieved from the Fenno-Ugrica collection for further purposes. Could the crowd do this work to support academic research? The challenge in crowdsourcing lies in its nature. In traditional crowdsourcing, the targets have often been split into several microtasks that do not require any special skills from the anonymous people, a faceless crowd. This way of crowdsourcing may produce quantitative results, but from the researchers' point of view there is a danger that the needs of linguists are not met. A further notable downside is the lack of a shared goal or social affinity; there is no reward in the traditional methods of crowdsourcing (de Boer et al., 2012). There has also been criticism that digital humanities makes the humanities too data-driven and oriented towards quantitative methods, losing the values of critical qualitative methods (Fish, 2012). On top of that, the downsides of traditional crowdsourcing become more evident once you leave the Anglophone world. Our potential crowd is geographically scattered across Russia. This crowd is linguistically heterogeneous, speaking 17 different languages. In many cases the languages are close to extinction or in need of revitalization, and the native speakers do not always have Internet access, so an open call for crowdsourcing would not have produced satisfactory results for linguists. Thus, one has to identify carefully the potential niches that can complete the needed tasks. When using the help of a crowd in a project that aims to support both linguistic research and the survival of endangered languages, the approach has to be a different one. In nichesourcing, the tasks are distributed amongst a small crowd of citizen scientists (communities). Although communities provide smaller pools to draw resources from, their specific richness in skill suits the complex tasks with high-quality product expectations found in nichesourcing. Communities have a purpose and identity, and their regular interaction engenders social trust and reputation.
These communities can also correspond more precisely to the needs of research (de Boer et al., 2012). Instead of repetitive and rather trivial tasks, we are trying to utilize the knowledge and skills of citizen scientists to provide qualitative results. In nichesourcing, we hand out assignments that precisely fill the gaps in linguistic research. A typical task would be editing and collecting words in fields of vocabulary where the researchers require more information. For instance, there is a lack of Hill Mari words and terminology in anatomy. We have digitized books in medicine, and we could try to track the words related to human organs by assigning the citizen scientists to edit and collect words with the OCR editor. From the nichesourcing perspective, it is essential that altruism play a central role when the language communities are involved. In nichesourcing, our goal is to reach a certain level of interplay in which the language communities benefit from the results. For instance, the corrected words in Ingrian will be added to an online dictionary, which is made freely available to the public, so that the society can benefit, too. This objective of interplay can be understood as an aspiration to support the endangered languages and the maintenance of lingual diversity, but also as a servant of 'two masters': research and society.
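The crowdsourcing application described in this abstract edits files in the ALTO XML format. The sketch below shows, in generic terms, the kind of correction such an editor applies on the server side: locating <String> elements and replacing a mis-recognized word while keeping the layout coordinates. The namespace URI, file name, and words are assumptions, not details of the project's actual tool.

```python
import xml.etree.ElementTree as ET

# ALTO v2 namespace is assumed here; real files declare their own schema version.
ALTO_STRING = "{http://www.loc.gov/standards/alto/ns-v2#}String"

def correct_word(alto_path: str, wrong: str, corrected: str) -> None:
    """Replace every occurrence of a mis-OCRed word in an ALTO XML page."""
    tree = ET.parse(alto_path)
    for string_el in tree.getroot().iter(ALTO_STRING):
        if string_el.get("CONTENT") == wrong:
            string_el.set("CONTENT", corrected)  # layout coordinates stay untouched
    tree.write(alto_path, encoding="utf-8", xml_declaration=True)

correct_word("page_0001.xml", wrong="bnrial", corrected="burial")
```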
Abstract:
The research in this Master's thesis project relates to Big Data transfer over parallel data links, and my main objective is to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and in applying Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications in order to determine which achieves the highest data transfer speed in which situations, and to explain the reasons. In the context of this thesis work, a comparison between five different utilities was carried out: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed that create random binary data (incompressible, to allow a fair comparison between the utilities), execute the utilities with specified parameters, record log files, results, and system parameters, and plot graphs to compare the results. Transferring such an enormous variety of data can take a long time, and hence the need arises to reduce energy consumption and make the transfers greener. In the context of the Green IT approach, our team used a Cloud Computing infrastructure called OpenStack. It is more efficient to allocate a specific amount of hardware resources to test different scenarios rather than using all the resources of our testbed. Testing our implementation on the OpenStack infrastructure shows that the virtual channel carries no other traffic, so we can achieve the highest possible throughput. After obtaining the final results, we are able to identify which utilities produce faster data transfers in different scenarios with specific TCP parameters, and these can then be used on real network data links.
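As a small illustration of the test-data step mentioned above, random bytes are effectively incompressible, so writing them to a file prevents any utility from gaining an unfair advantage through compression. The file name and size here are arbitrary; this is not the project's actual script.

```python
import os
import time

def make_random_file(path: str, size_mib: int) -> None:
    """Write size_mib mebibytes of cryptographically random (incompressible) data."""
    chunk = 1024 * 1024
    with open(path, "wb") as f:
        for _ in range(size_mib):
            f.write(os.urandom(chunk))

start = time.time()
make_random_file("testdata.bin", size_mib=256)
print(f"wrote 256 MiB in {time.time() - start:.1f} s")
```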
Integration of marketing research data in new product development. Case study: Food industry company
Abstract:
The aim of this master's thesis is to provide a real-life example of how marketing research data are used by different functions in the NPD process. In order to achieve this goal, a case study was carried out in a company, examining the gathering, analysis, distribution, and synthesis of marketing research data in NPD. The main research question was formulated as follows: How is marketing research data integrated and used by different company functions in the NPD process? The theoretical part of the master's thesis focuses on the role of the marketing function in NPD, the use of marketing research particularly in the food industry, and issues related to the marketing/R&D interface during the NPD process. The empirical part of the master's thesis is based on qualitative explanatory case study research. Individual in-depth interviews with company representatives, company documents, and online research were used for data collection and analyzed through triangulation. The empirical findings suggest that the most important marketing data sources at the concept generation stage of NPD are global trend monitoring, retail audits, and consumer insights. These data sets are crucial for establishing the potential of the product on the market and for defining the desired features of the new product to be developed. The findings also provide an example of successful cross-functional communication during the NPD process, with both formal and informal communication patterns. General managerial recommendations are given on integrating strategy, process, continuous improvement, and motivated cross-functional product development teams in NPD.
Abstract:
The effectiveness of cleaning and sanitizing procedures in controlling Staphylococcus aureus, Salmonella Enteritidis, and Pseudomonas fluorescens adhered to granite and stainless steel was evaluated. There was no significant difference (p > 0.05) in the adherence of pure cultures of these microorganisms to stainless steel. The numbers of P. fluorescens and S. Enteritidis adhered to granite were greater (p < 0.05) than the numbers of S. aureus. Additionally, the adherence of P. fluorescens was similar to that of S. Enteritidis on the granite surface. In a mixed culture with P. fluorescens, S. aureus adhered less (p < 0.05) to stainless steel surfaces (1.31 log CFU/cm²) than when in a pure culture (6.10 log CFU/cm²). These results suggest that P. fluorescens inhibited the adherence of S. aureus. However, this inhibition was not observed in the adherence process on granite. There was a significant difference (p < 0.05) between the number of adhered cells before and after pre-washing for S. aureus on stainless steel and granite surfaces, and after washing with detergent for all microorganisms and surfaces. The efficiency of the cleaning plus sanitizing procedures did not differ significantly (p > 0.05) between the surfaces. However, a significant difference (p < 0.05) was observed between the sanitizer solutions. Sodium hypochlorite and peracetic acid were more bactericidal (p < 0.05) than the quaternary ammonium compound. Among the microorganisms, S. aureus was the least resistant to the sanitizers. These results show the importance of good cleaning and sanitization procedures to prevent bacterial adherence and biofilm formation.
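Since adhered counts are reported on a log CFU/cm² scale, the effect of a cleaning or sanitizing step is conveniently expressed as a log reduction, that is, the difference between the log counts before and after treatment. The sketch below illustrates this arithmetic with invented counts, not the study's data.

```python
import math

def log_reduction(cfu_before: float, cfu_after: float) -> float:
    """Log10 reduction between two counts expressed in CFU/cm²."""
    return math.log10(cfu_before) - math.log10(cfu_after)

before = 10 ** 6.0   # e.g. 6.0 log CFU/cm² adhered before treatment (invented)
after = 10 ** 2.5    # e.g. 2.5 log CFU/cm² remaining after treatment (invented)
print(f"{log_reduction(before, after):.2f} log reduction")
```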
Abstract:
Response Surface Methodology (RSM) was applied to evaluate the chromatic features and sensory acceptance of emulsions combining Soy Protein (SP) and red Guava Juice (GJ). The parameters analyzed were instrumental color based on the coordinates a* (redness), b* (yellowness), L* (lightness), C* (chromaticity), and h* (hue angle), together with visual color, acceptance, and appearance. Analysis of the results showed that GJ was responsible for the high measured values of red color, hue angle, chromaticity, acceptance, and visual color, whereas SP was the variable that increased the yellowness intensity of the assays. Redness (R²adj = 74.86%, p < 0.01) and hue angle (R²adj = 80.96%, p < 0.01) were related to the independent variables by linear models, while the sensory data (color and acceptance) could not be modeled due to high variability. The models for yellowness, lightness, and chromaticity did not present lack of fit, but their adjusted determination coefficients were below 70%. Nevertheless, the linear correlations between sensory and instrumental data were not significant (p > 0.05) and low Pearson coefficients were obtained. The results showed that RSM is a useful tool to develop soy-based emulsions and to model some chromatic features of guava-based emulsions.
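The first-order response-surface models mentioned above (for example, a* as a linear function of the SP and GJ levels) can be fitted by ordinary least squares. The sketch below uses invented design points and responses purely to show the fitting and the computation of R²; it does not reproduce the study's data or its exact model.

```python
import numpy as np

sp = np.array([2.0, 2.0, 4.0, 4.0, 3.0, 3.0, 3.0])            # soy protein level (invented)
gj = np.array([20.0, 40.0, 20.0, 40.0, 30.0, 30.0, 30.0])     # guava juice level (invented)
a_star = np.array([8.1, 14.9, 7.5, 14.2, 11.0, 11.3, 10.8])   # measured redness a* (invented)

X = np.column_stack([np.ones_like(sp), sp, gj])   # intercept + linear terms
coef, *_ = np.linalg.lstsq(X, a_star, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((a_star - pred) ** 2) / np.sum((a_star - a_star.mean()) ** 2)
print("b0, b_SP, b_GJ =", np.round(coef, 3), " R2 =", round(r2, 3))
```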
Abstract:
The objective of this study was to predict, by means of an Artificial Neural Network (ANN) of the multilayer perceptron type, the texture attributes of light cheesecurds perceived by trained judges, based on instrumental texture measurements. The inputs to the network were the instrumental texture measurements of the light cheesecurd (imitative and fundamental parameters). The output variables were the sensory attributes consistency and spreadability. Nine light cheesecurd formulations composed of different combinations of fat and water were evaluated. The measurements obtained by the instrumental and sensory analyses of these formulations constituted the data set used for training and validation of the network. Network training was performed using a back-propagation algorithm. The selected network architecture was composed of 8-3-9-2 neurons in its layers, which quickly and accurately predicted the sensory texture attributes studied, showing a high correlation between the predicted and experimental values for the validation data set and excellent generalization ability, with a validation RMSE of 0.0506.
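The reported 8-3-9-2 architecture (eight instrumental inputs, hidden layers of three and nine neurons, two sensory outputs) can be sketched with scikit-learn's MLPRegressor, which is also trained by back-propagation. The data below are synthetic placeholders, not the study's cheesecurd measurements, so this only illustrates the network shape and the RMSE calculation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 8))                      # 8 instrumental texture parameters (synthetic)
y = np.column_stack([X[:, 0] + 0.5 * X[:, 3],     # "consistency" (synthetic)
                     X[:, 1] - 0.2 * X[:, 5]])    # "spreadability" (synthetic)

model = MLPRegressor(hidden_layer_sizes=(3, 9), max_iter=5000, random_state=0)
model.fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```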
Abstract:
This thesis presents an overview of the Open Data research area and the quantity of available evidence, and establishes the research evidence base through a Systematic Mapping Study (SMS). A total of 621 relevant publications published between 2005 and 2014 were identified, of which 243 were selected in the review process. The thesis highlights the implications of the proliferation of Open Data principles in the emerging era of accessibility, reusability, and sustainability of data transparency. The findings of the mapping study are described in quantitative and qualitative terms based on organizational affiliation, country, year of publication, research method, star rating, and the units of analysis identified. Furthermore, the units of analysis were categorized by development lifecycle, linked open data, type of data, technical platforms, organizations, ontology and semantics, adoption and awareness, intermediaries, security and privacy, and supply of data, which are important components for providing quality open data applications and services. The results of the mapping study help organizations (such as academia, government, and industry), researchers, and software developers to understand the existing trends in open data, the latest research developments, and the demand for future research. In addition, the proposed conceptual framework of Open Data research can be adopted and expanded to strengthen and improve current open data applications.
Abstract:
In the new age of information technology, big data has grown to be a prominent phenomenon. As information technology evolves, organizations have begun to adopt big data and apply it as a tool throughout their decision-making processes. Research on big data has grown in recent years, however mainly from a technical standpoint, and there is a void in business-related cases. This thesis fills that gap in the research by addressing big data challenges and failure cases. The Technology-Organization-Environment (TOE) framework was applied to carry out a literature review on trends in Business Intelligence and Knowledge Management information system failures. A review of the extant literature was carried out using a collection of leading information systems journals. Academic papers and articles on big data, Business Intelligence, Decision Support Systems, and Knowledge Management systems were studied from both the failure and success perspectives in order to build a model of big data failure. I also delineate the contribution of the Information Systems failure literature, as it provides the principal dynamics behind the technology-organization-environment framework. The gathered literature was then categorised, and a failure model was developed from the identified critical failure points. The failure constructs were further categorized, defined, and tabulated into a contextual diagram. The developed model and table were designed to act as a comprehensive starting point and as general guidance for academics, CIOs, and other system stakeholders, to facilitate decision-making in the big data adoption process by measuring the effect of technological, organizational, and environmental variables on perceived benefits, dissatisfaction, and discontinued use.
Abstract:
Outsourcing and offshoring, or any combination of the two, have not just become popular phenomena but are viewed as among the most important management strategies owing to the new possibilities opened up by globalization. They have been seen as a way to save costs and improve customer service. Executing offshoring and offshore outsourcing successfully can be more complex than initially expected. The potential cost savings resulting from offshoring and offshore outsourcing are often based on lower manufacturing costs. However, these benefits may be offset by a more complex supply chain with service-level challenges that can, in turn, increase costs. Therefore, analyzing the total cost effects of offshoring and outsourcing is necessary. The aim of this Master's thesis was to construct a total cost model based on the academic literature in order to calculate the total costs and to analyze the reasonability of offshoring and offshore outsourcing the case company's production compared to insourcing it. The research data were mainly quantitative and collected mostly from the case company's past sales and production records. In addition, management-level interviews were conducted at the case company. The information from these interviews was used to qualify the necessary quantitative data and to add supporting information that could not be gathered from the quantitative data. Both data collection and analysis were guided by a theoretical frame of reference based on the academic literature on offshoring and outsourcing, statistical calculation of demand, and total costs. The results confirm the theory that offshoring and offshore outsourcing reduce total costs, as both options result in lower total annual costs than insourcing, mainly due to lower manufacturing costs. However, increased demand uncertainty would make the offshore outsourcing alternative riskier and more difficult to manage. Therefore, when assessing the overall impact of the alternatives, offshoring is the preferable option. As the main cost savings in offshore outsourcing came from lower manufacturing costs, more specifically labour costs, logistics costs did not have a substantial effect on the total costs of this case company. Management should therefore pay attention first to manufacturing costs and then to logistics costs when choosing the best production sourcing option for the company.
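A stripped-down, hypothetical version of the kind of total-cost comparison the thesis constructs is sketched below: annual total cost as manufacturing plus logistics plus inventory holding, evaluated for the three sourcing alternatives. All cost figures are invented, and the real model naturally contains more components.

```python
def total_annual_cost(units: int, unit_mfg_cost: float,
                      unit_logistics_cost: float, inventory_holding: float) -> float:
    """Annual total cost = variable manufacturing + logistics + inventory holding."""
    return units * (unit_mfg_cost + unit_logistics_cost) + inventory_holding

demand = 100_000  # units per year (invented)
options = {
    "insourcing":           total_annual_cost(demand, 12.0, 0.5, 50_000),
    "offshoring":           total_annual_cost(demand, 8.0, 1.8, 120_000),
    "offshore outsourcing": total_annual_cost(demand, 7.5, 1.8, 150_000),
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} {cost:>12,.0f} per year")
```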