974 results for Process mean
Abstract:
This final degree project covers the development of a personal expense management web application from its inception to its full operation. These applications are experiencing strong market growth, which means that competition between them is very high. For this reason, the design of the application developed in this work has been treated with great care. It is a meticulous process that gives each of the parts making up the application unique characteristics, which translate into features for the user, such as adding one's own monthly expenses and income, producing charts of one's main expenses, obtaining advice from an external source, etc. These unique features, together with more general ones, such as a graphic design in a wide range of colours, make the application easier and more intuitive to use. It should be noted that, to optimise its use, the application is responsive, that is, it adapts its interface to the screen size of the device from which it is accessed. The application is developed with MEAN.JS, one of the newest technologies on the market and one of the most revolutionary of the moment. Given the innovative nature of applying this technology, the challenges posed by the project are varied, from how to structure the project folders and the entire backend to how to design the frontend. In addition, once development and deployment are complete, possible improvements are analysed in order to refine the application in its entirety.
Abstract:
We report a study of the dynamics of the unbinding process under a force load f of proteins (fibrinogen) adsorbed on a solid surface (hydrophilic silica), by means of atomic force microscopy (AFM) spectroscopy. By varying the loading rate r_f, defined by f = r_f·t, where t is time, we find that, as for specific interactions, the mean rupture force increases with r_f. This unbinding process is analyzed in the framework of the widely used Bell model. The typical dissociation rate at zero force entering the model lies between 0.02 and 0.6 s⁻¹. Each measured rupture is characterized by a force f₀, which appears to be quantized in integer multiples of 180–200 pN.
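As a reading aid, this is the standard form of the Bell model the abstract refers to; the symbols below (x_β for the barrier distance) are the conventional ones and are not taken from the paper itself:

```latex
% Bell model, standard form (assumed notation, not copied from the paper).
% Force-dependent dissociation rate, with k_0 the zero-force rate
% (0.02--0.6 s^{-1} in the abstract) and x_\beta the barrier distance:
k(f) = k_0 \exp\!\left(\frac{f\,x_\beta}{k_B T}\right)
% Under a linear force ramp f = r_f t, the most probable rupture force grows
% logarithmically with the loading rate r_f, consistent with the reported
% increase of the mean rupture force with r_f:
f^{*} = \frac{k_B T}{x_\beta}\,\ln\!\left(\frac{r_f\,x_\beta}{k_0\,k_B T}\right)
```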
Abstract:
Time series of brightness temperatures (T_B) from the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) are examined to determine ice phenology variables on the two largest lakes of northern Canada: Great Bear Lake (GBL) and Great Slave Lake (GSL). T_B measurements from the 18.7, 23.8, 36.5, and 89.0 GHz channels (H- and V-polarization) are compared to assess their potential for detecting freeze-onset/melt-onset and ice-on/ice-off dates on both lakes. The 18.7 GHz (H-pol) channel is found to be the most suitable for estimating these ice dates as well as the duration of the ice cover and ice-free seasons. A new algorithm is proposed using this channel and applied to map all ice phenology variables on GBL and GSL over seven ice seasons (2002-2009). Analysis of the spatio-temporal patterns of each variable at the pixel level reveals that: (1) both freeze-onset and ice-on dates occur on average about one week earlier on GBL than on GSL (Day of Year (DY) 318 and 333 for GBL; DY 328 and 343 for GSL); (2) the freeze-up process or freeze duration (freeze-onset to ice-on) takes slightly longer on GBL than on GSL (about one week on average); (3) melt-onset and ice-off dates occur on average one week and approximately four weeks later, respectively, on GBL (DY 143 and 183 for GBL; DY 135 and 157 for GSL); (4) the break-up process or melt duration (melt-onset to ice-off) lasts on average about three weeks longer on GBL; and (5) ice cover duration estimated from each individual pixel is on average about three weeks longer on GBL compared to its more southern counterpart, GSL. A comparison of dates for several ice phenology variables derived from other satellite remote sensing products (e.g. NOAA Interactive Multisensor Snow and Ice Mapping System (IMS), QuikSCAT, and the Canadian Ice Service Database) shows that, despite its relatively coarse spatial resolution, AMSR-E 18.7 GHz provides a viable means for monitoring ice phenology on large northern lakes.
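For illustration only, a minimal sketch of how a threshold-based detector on an 18.7 GHz (H-pol) T_B series might flag freeze onset; the paper's actual algorithm is not reproduced here, and the 240 K threshold and 5-day persistence rule are invented placeholders:

```python
# Hypothetical threshold detector for ice dates from a T_B time series.
# At 18.7 GHz H-pol, ice has higher emissivity than open water, so T_B
# rises at freeze-up; the threshold and persistence values are assumptions.
import numpy as np

def first_crossing(tb, doy, threshold=240.0, rising=True, persist=5):
    """Return the day of year at which T_B first crosses `threshold`
    and stays on that side for `persist` consecutive observations."""
    above = tb > threshold if rising else tb < threshold
    for i in range(len(above) - persist + 1):
        if above[i : i + persist].all():
            return int(doy[i])
    return None

# Synthetic freeze-up curve: T_B climbs from ~220 K to ~260 K around DY 330.
doy = np.arange(280, 365)
tb = 220 + 40 / (1 + np.exp(-(doy - 330) / 3))
print("freeze onset (DY):", first_crossing(tb, doy))
```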
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
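A minimal worked version of the construction the abstract sketches, under assumed notation (p_θ the model density, g the objective represented as a density); the paper's exact update rule may differ:

```latex
% Assumed notation: p(x\mid\theta) is the model density and g(x)\propto f(x)
% is the objective represented as a density of assumed form.
D_{\mathrm{KL}}\bigl(p_\theta \,\|\, g\bigr)
  = \int p(x\mid\theta)\,\ln\frac{p(x\mid\theta)}{g(x)}\,dx
% Since \mathbb{E}_{p_\theta}[\nabla_\theta \ln p_\theta(x)] = 0, the gradient
% reduces to a population expectation that samples x_1,\dots,x_N \sim p_\theta
% can estimate:
\nabla_\theta D_{\mathrm{KL}}
  = \mathbb{E}_{p_\theta}\!\left[\ln\frac{p_\theta(x)}{g(x)}\,
      \nabla_\theta \ln p_\theta(x)\right]
% giving the stochastic-gradient update
% \theta \leftarrow \theta - \eta\,\widehat{\nabla}_\theta D_{\mathrm{KL}}.
```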
Abstract:
This accessible, practice-oriented and compact text provides a hands-on introduction to the principles of market research. Using the market research process as a framework, the authors explain how to collect and describe the necessary data and present the most important and frequently used quantitative analysis techniques, such as ANOVA, regression analysis, factor analysis, and cluster analysis. An explanation is provided of the theoretical choices a market researcher has to make with regard to each technique, as well as how these are translated into actions in IBM SPSS Statistics. This includes a discussion of what the outputs mean and how they should be interpreted from a market research perspective. Each chapter concludes with a case study that illustrates the process based on real-world data. A comprehensive web appendix includes additional analysis techniques, datasets, video files and case studies. Several mobile tags in the text allow readers to quickly browse related web content using a mobile device.
Abstract:
The concept of plagiarism is commonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of 'moral', non-material goods has evolved into a right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solutions to plagiarism therefore most often fall under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Howard (1995) and Scollon (1994, 1995) argued, albeit differently, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that plagiarism cannot be studied on the assumption that a single definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, and so did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If, among higher education students, plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction should be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist's unfamiliarity with the text or topic, it should be considered 'positive plagiarism' (Howard, 1995: 796) and hence not an offense. The intention behind an instance of plagiarism therefore determines the nature of the disciplinary action adopted. Unfortunately, in order to demonstrate the intention to deceive and charge students with accusations of plagiarism, teachers necessarily have to position themselves as 'plagiarism police', although it has been argued otherwise (Robillard, 2008). In practice, teachers find themselves in their daily activities required to command investigative skills and tools that they most often lack. We thus claim that the 'intention to deceive' cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism often connotes an illegal appropriation of ideas. Earlier, Turell (2004) had shown, through a comparison of four translations of Shakespeare's Julius Caesar into Spanish, that linguistic evidence can demonstrate instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for whom plagiarism potentially has 'severe ethical and legal consequences' (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common, and what academia usually lacks, is their focus on the legal nature of plagiarism.
We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed. In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist's intention, or the lack of it, this potential is restricted by the ability to identify a text as suspect of plagiarism in the first place. In an era of such massive textual production, 'policing' plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection must of necessity draw not only on concepts from education and computational linguistics, but also on forensic linguistics, especially if it is to counter claims of being a 'simplistic response' (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with related words from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items are replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate between instances of 'patchwriting' and instances of plagiarism. This research demonstrates that referencing verbs are borrowed from the original in an attempt to construct the new text cohesively when the plagiarism is inadvertent, and that, when it is intentional, the plagiarist makes an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus 'betray' and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical if detection software is to identify them as proper paraphrasing and not, mistakenly and simplistically, as plagiarism.
Abstract:
Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool to visualise structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure. Therefore, hierarchical/multi-level visualisation methods have been used to extract more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (with e.g. K-means, Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the ‘Oil Flow’ dataset and a dataset of protein electrostatic potentials for the ‘Major Histocompatibility Complex (MHC) class I’ of humans. In both cases, visual observation and the quantitative quality measures have shown better visualisation at lower levels.
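A hedged sketch of the two-level scheme in Python, with PCA standing in for the GPLVM (scikit-learn has no GPLVM implementation); the segmentation in the visualisation space, per-subset refits, and the trustworthiness metric follow the abstract, while the data and cluster count are invented:

```python
# Multi-level visualisation sketch: global projection, segmentation in the
# 2-D visualisation space, then one child model per segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # placeholder high-dimensional data

# Parent level: one global two-dimensional projection.
Z = PCA(n_components=2).fit_transform(X)

# Segment in the *visualisation* space, then fit a child model per subset.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
for k in range(3):
    Xk = X[labels == k]
    Zk = PCA(n_components=2).fit_transform(Xk)
    # Trustworthiness is one of the quality metrics named in the abstract.
    print(f"cluster {k}: trustworthiness = "
          f"{trustworthiness(Xk, Zk, n_neighbors=5):.3f}")
```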
Abstract:
This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000 dimensional state vectors and compares the results empirically with various well-known inference methodologies.
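A sketch of a standard Gaussian mean-field construction for SDE inference that is consistent with the abstract's description; the exact free-energy functional used in the paper is not reproduced here, and the linear form of the approximating process is an assumption:

```latex
% Diffusion model and an assumed time-varying linear (Gaussian) approximation:
dx_t = f(x_t)\,dt + \Sigma^{1/2}\,dW_t
  \quad\approx\quad
dx_t = \bigl(-A(t)\,x_t + b(t)\bigr)\,dt + \Sigma^{1/2}\,dW_t
% Because the approximating process is Gaussian, the variational free energy
% becomes a functional of the marginal moments m(t), S(t), which obey
\frac{dm}{dt} = -A(t)\,m(t) + b(t), \qquad
\frac{dS}{dt} = -A(t)\,S(t) - S(t)\,A(t)^{\!\top} + \Sigma
% Restricting m(t) and S(t) to piecewise polynomials in t turns these ODE
% constraints into a finite set of coefficients, which is how the abstract's
% complexity reduction to hidden-Markov-model levels can be understood.
```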
Abstract:
This work was supported by the Bulgarian National Science Fund under grant BY-TH-105/2005.
Abstract:
This paper is dedicated to modelling network maintenance based on a live example: maintaining an ATM banking network, where any problem means monetary loss. A full analysis is made in order to separate valuable from non-valuable parameters through a comprehensive analysis of the available data. Correlation analysis helps to evaluate the provided data and to produce a comprehensive solution for increasing the effectiveness of network maintenance.
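As a toy illustration of the correlation screening step described above (all column names and values are invented placeholders, not taken from the paper):

```python
# Hypothetical screening of maintenance parameters against a loss proxy.
import pandas as pd

df = pd.DataFrame({
    "downtime_min": [12, 45, 7, 60, 23, 5, 38],    # loss proxy
    "failed_txns":  [30, 110, 15, 160, 60, 9, 95], # candidate parameter
    "branch_id":    [1, 2, 3, 4, 5, 6, 7],         # likely non-valuable
})

# Parameters with high |correlation| to the loss proxy count as 'valuable'.
print(df.corr(numeric_only=True)["downtime_min"].sort_values(ascending=False))
```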
Abstract:
2000 Mathematics Subject Classification: 60J80.
Abstract:
2000 Mathematics Subject Classification: Primary 60J80, Secondary 62F12, 60G99.
Abstract:
Background aims: The cost-effective production of human mesenchymal stromal cells (hMSCs) for off-the-shelf and patient-specific therapies will require an increasing focus on improving product yield and driving manufacturing consistency. Methods: Bone marrow-derived hMSCs (BM-hMSCs) from two donors were expanded for 36 days in monolayer with medium supplemented with either fetal bovine serum (FBS) or PRIME-XV serum-free medium (SFM). Cells were assessed throughout culture for proliferation, mean cell diameter, colony-forming potential, osteogenic potential, gene expression and metabolites. Results: Expansion of BM-hMSCs in PRIME-XV SFM resulted in a significantly higher growth rate (P < 0.001) and increased consistency between donors compared with FBS-based culture. FBS-based culture showed an inter-batch production range of 0.9 and 5 days per dose compared with 0.5 and 0.6 days in SFM for each BM-hMSC donor line. The consistency between donors was also improved by the use of PRIME-XV SFM, with a production range of 0.9 days compared with 19.4 days in FBS-based culture. Mean cell diameter has also been demonstrated as a process metric for BM-hMSC growth rate and senescence through a correlation (R² = 0.8705) across all conditions. PRIME-XV SFM has also shown increased consistency in BM-hMSC characteristics such as per-cell metabolite utilization, in vitro colony-forming potential and osteogenic potential despite the higher number of population doublings. Conclusions: We have increased the yield and consistency of BM-hMSC expansion between donors, demonstrating a level of control over the product, which has the potential to increase the cost-effectiveness and reduce the risk in these manufacturing processes.
Abstract:
2000 Mathematics Subject Classification: Primary 60G55; secondary 60G25.
Abstract:
The purpose of this study was to determine the efficacy of a writing process approach for the instruction of language arts with learning disabled elementary students. A nonequivalent control group design was used. The sample included 24 students with learning disabilities who were in second and third grade. All students were instructed in resource room settings for ninety minutes per day in language arts. The students in the treatment group received instruction using the writing process steps to create complete, meaningful compositions on self-chosen topics. A literature-based reading program accompanied instruction in writing to provide examples of good writing and a basis for topic selection. The students in the control group received instruction through the use of the county-adopted textbooks and accompanying worksheets. The teacher followed basic textbook and curriculum guide suggestions, which consisted mainly of fill-in-the-blank and matching-type exercises. The treatment group consisted of 12 students: five second-graders and seven third-graders. The control group consisted of 12 students: four second-graders and eight third-graders. All students were pretested and posttested using the Woodcock-Johnson Tests of Achievement-Revised (WJ-R ACH) for writing samples and the Woodcock Reading Mastery Test (WRMT) for reading achievement. T-tests were also done to investigate the gain from pretest to posttest for each reading or writing variable for each group separately. The results showed a highly significant difference from pretest to posttest for all writing and reading variables for both groups. Analysis of covariance showed that the population mean posttest achievement scores for all variables, adjusted for the pretest, were higher for the treatment group than those for the control group.
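For readers unfamiliar with the design, a minimal sketch of an analysis of covariance of this kind on synthetic scores (the study's actual data are not reproduced; the per-group sample size follows the abstract, and the assumed treatment effect is invented):

```python
# ANCOVA sketch: posttest scores compared across groups while adjusting
# for the pretest covariate, expressed as a linear model in statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 12  # per group, as in the study
df = pd.DataFrame({
    "group": ["treatment"] * n + ["control"] * n,
    "pre": rng.normal(85, 10, 2 * n),
})
effect = np.where(df["group"] == "treatment", 8.0, 0.0)  # assumed effect
df["post"] = df["pre"] + effect + rng.normal(0, 5, 2 * n)

# Posttest ~ pretest covariate + group factor; the group coefficient is the
# covariate-adjusted treatment difference the abstract reports as higher.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary().tables[1])
```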