52 results for Adaptive template


Relevance:

10.00%

Publisher:

Abstract:

The main idea is to create a product that helps companies in the process of preselecting the ERP that can potentially best fit their size, needs, costs, infrastructure, etc.

Relevance:

10.00%

Publisher:

Abstract:

The final objective is to provide a model, by way of an introductory solution, for building enterprise applications on the J2EE architecture, which will serve as a guide or template for development teams.

Relevance:

10.00%

Publisher:

Abstract:

The project consists of developing a specialized tool for classifying and searching products. The application can easily be extended to adapt it to any type of product or future need. The project was programmed by taking advantage of the power and simplicity of the .NET environment and its object-oriented languages, in this case VB.

Relevance:

10.00%

Publisher:

Abstract:

It begins by discussing information systems, moving on to the world of ERPs and, in particular, to one of the most important worldwide, SAP R/3. It then explains what data warehouses are, how they are built and what applications they have. Finally, a practical case is analysed and planned: a company replaces its corporate ERP with SAP R/3 and must therefore change and adapt its EIS.

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work is to choose a content management system (CMS) for managing a virtual campus. Studying the characteristics of several of the most widely used CMSs will also allow us to approach them in a practical way, since part of the work includes, once the CMS has been chosen, installing it on an Internet server and configuring it appropriately for the roles of administrators, teachers, students, etc.

Relevance:

10.00%

Publisher:

Abstract:

The representations that students form of academic tasks are key to understanding how they carry them out. We believe that doctoral students and their theses are no exception, and for that reason this research investigates how students understand doctoral studies. The literature on the doctoral experience, that is, on how doctoral candidates perceive the process, focuses on variables of well-being, learning context and writing. To obtain a complete picture of how doctoral candidates understand writing a thesis, 627 of them completed the Doctoral Experience Questionnaire (Lonka et al., 2007), which we adapted to the Spanish population. This instrument measures the three variables mentioned (through 49 Likert-scale items) and, more generally, some aspects of the doctoral process (8 open-ended questions) that complement and shed light on the interpretation of the quantitative data. In addition, information on each candidate's context is collected (18 questions), which helps to better understand the development of the thesis in each case. Since some of the difficulties students report during the doctorate relate to the perception of lacking sufficient strategies to regulate the writing process, we also collected more specific data on thesis writing by interviewing 10 doctoral candidates individually and then together in a focus group. We believe our research can contribute to reflection on the quality of doctoral programmes, since students have much to say and their voices should be heard. Moreover, if supervisors have information on how their students experience doctoral studies, they will probably better understand how they carry out their theses and will be able to offer them better-adjusted support.

Relevance:

10.00%

Publisher:

Abstract:

Web-based hypermedia systems for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems that aims to increase their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture to add navigational adaptability and user-friendly processes that guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The current PLAN-G prototype is successfully used with some informatics courses (this version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), to interact directly with the student when necessary; and Information Agents (Intermediaries), to filter and discover information to learn and to adapt the navigation space to a specific student.

Relevance:

10.00%

Publisher:

Abstract:

Innovation is a research topic with a broad tradition. However, learning processes, from which innovations emerge, and the dynamics of change and development have traditionally been studied in relation to the manufacturing sector. Moreover, the objects of study have usually been process and tangible product innovations. Although researchers have recently turned their attention to other sectors, more research on service innovation should be carried out. Furthermore, regarding innovation in tourism, there is a need to adapt generic theories to the tourism sector and to contribute new ideas.

In order to find out where innovation processes originate, it is necessary to look into two fundamental subjects inherent to innovation: learning and interaction. Both are closely related. The first appears to be an intrinsic condition of individuals; it can also be identified in organizations. Thus, learning allows individuals as well as organizations to develop. However, learning and development are not possible without taking the environment into account. Hence, interactions must take place between individuals, groups of individuals, organizations, etc. Furthermore, the concept of interaction implies the transfer of knowledge, which is the basis for innovations.

The purposes of this master's thesis are to study several of these topics in detail and to develop a conceptual framework for research on innovation in tourism.

Relevance:

10.00%

Publisher:

Abstract:

This report describes how Spanish Law 11/2007, of 22 June, on citizens' electronic access to public services, is the starting point for the transformation of public administrations, among them town councils, towards e-government. The project described here was implemented on the premise that simplifying, standardising and homogenising administrative processes must be the first step. We must then help the town council move towards its own e-government model and thus achieve the objectives of improving internal management and providing an optimal, high-quality service to citizens. Finally, we must establish a system for monitoring and evaluating the actions linked to the implemented e-government, which will allow the project team to adjust and realign the council's strategy with the established objectives, identifying areas for improvement and legitimising decision-making.

Relevance:

10.00%

Publisher:

Abstract:

Roughly fifteen years ago, the Church of Jesus Christ of Latter-day Saints published a new proposed standard file format, called GEDCOM. It was designed to allow different genealogy programs to exchange data.

Five years later, in May 2000, the GENTECH Data Modeling Project appeared, with the support of the Federation of Genealogical Societies (FGS) and other American genealogical societies. It attempted to define a genealogical logical data model to facilitate data exchange between different genealogy programs. Although genealogists deal with an enormous variety of data sources, one of the central concepts of this data model was that all genealogical data could be broken down into a series of short, formal genealogical statements. This was more versatile than merely exporting and importing data records with predefined fields. The project was finally absorbed in 2004 by the National Genealogical Society (NGS).

Despite being a genealogical reference for many applications, these models have serious drawbacks when adapting to different cultural and social environments. At present there is no formal proposal for a recognized standard to represent the family domain.

Here we propose an alternative conceptual model, largely inherited from the aforementioned models. The design is intended to overcome their limitations; however, its major innovation lies in applying the ontological paradigm when modeling statements and entities.

Relevance:

10.00%

Publisher:

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to as much as 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data sampled at minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations.

A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. For this reason we emphasize the calibration of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The data sources used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis; no other mathematical or statistical software was used.
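The distance-based flavour of pairs trading described in this abstract can be sketched in a few lines. The Python fragment below is an illustration only, not the thesis' MATLAB implementation; the rolling window and the entry/exit thresholds are hypothetical choices. It trades the log-price spread of two series on a rolling z-score:

```python
import numpy as np

def pairs_trading_signal(pa, pb, window=60, entry=2.0, exit=0.5):
    """Rolling z-score signal on the spread of two price series.

    Go long the spread when z < -entry, short when z > entry,
    and flatten when |z| < exit; otherwise hold the previous
    position. A distance-method sketch with hypothetical parameters.
    """
    spread = np.log(pa) - np.log(pb)
    pos = np.zeros(len(spread))
    for t in range(window, len(spread)):
        win = spread[t - window:t]
        z = (spread[t] - win.mean()) / win.std()
        if z > entry:
            pos[t] = -1          # spread too wide: short it
        elif z < -entry:
            pos[t] = 1           # spread too narrow: long it
        elif abs(z) < exit:
            pos[t] = 0           # reverted: close the position
        else:
            pos[t] = pos[t - 1]  # hold
    return pos
```

The signal is F_t-measurable in the sense discussed above: at each step it only looks at the window of past spread values, never ahead.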

Relevance:

10.00%

Publisher:

Abstract:

This short paper addresses the problem of designing QFT (quantitative feedback theory) based controllers for vibration reduction in a 6-story building structure equipped with shear-mode magnetorheological (MR) dampers. A new methodology is proposed for characterizing the nonlinear hysteretic behavior of the MR damper through the uncertainty template in the Nichols chart. The QFT control design procedure is briefly presented.

Relevance:

10.00%

Publisher:

Abstract:

Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra.

The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below.

First of all, we adapt the well-known spline smoothing techniques to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves.

The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only.

A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
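The pipeline this abstract describes (move to a compositional algebra, spline-smooth each trajectory, then cluster the smooth curves) can be sketched as follows. This Python fragment is a hedged illustration, not the authors' method: the ilr basis, the smoothing parameter `s`, and Ward linkage on a plain L2 distance (a shape-and-level metric) are all assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster

def ilr(comp):
    """Isometric log-ratio transform of 3-part compositions (rows sum to 1)."""
    z1 = np.sqrt(1 / 2) * np.log(comp[:, 0] / comp[:, 1])
    z2 = np.sqrt(2 / 3) * np.log(np.sqrt(comp[:, 0] * comp[:, 1]) / comp[:, 2])
    return np.column_stack([z1, z2])

def smooth_trajectory(t, traj, s=0.5):
    """Spline-smooth each ilr coordinate of one compositional trajectory."""
    z = ilr(traj)
    return np.column_stack(
        [UnivariateSpline(t, z[:, j], s=s)(t) for j in range(z.shape[1])]
    )

def cluster_trajectories(t, trajs, k=2, s=0.5):
    """Hierarchical (Ward) clustering on L2 distances between smoothed curves."""
    smooth = np.array([smooth_trajectory(t, tr, s) for tr in trajs])
    flat = smooth.reshape(len(trajs), -1)  # one row per trajectory
    Z = linkage(flat, method="ward")
    return fcluster(Z, t=k, criterion="maxclust")
```

Working in ilr coordinates means the Euclidean distance between smoothed curves corresponds to the Aitchison geometry of the simplex, which is one way to supply the "suitable compositional algebra" the abstract calls for.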