950 results for comprehensive model
Abstract:
Using survey data from 358 online customers, the study finds that the e-service quality construct conforms to the structure of a third-order factor model that links online service quality perceptions to distinct and actionable dimensions, including (1) website design, (2) fulfilment, (3) customer service, and (4) security/privacy. Each dimension is found to consist of several attributes that define the basis of e-service quality perceptions. A comprehensive specification of the construct, which includes attributes not covered in existing scales, is developed. The study contrasts a formative model consisting of 4 dimensions and 16 attributes with a reflective conceptualization. The results of this comparison indicate that studies using an incorrectly specified model overestimate the importance of certain e-service quality attributes. Global fit criteria are also found to support the detection of measurement misspecification. Meta-analytic data from 31,264 online customers are used to show that the developed measure predicts customer behavior better than widely used scales such as WebQual and E-S-Qual. The results show that the new measure enables managers to assess e-service quality more accurately and predict customer behavior more reliably.
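As a hedged illustration of the reflective specification the study compares against, the sketch below fits a toy single-factor reflective model with the Python semopy package on synthetic data. The indicator names, the generated data, and the one-factor structure are illustrative assumptions; the paper's actual third-order model with 4 dimensions and 16 attributes is not reproduced here.

```python
# Minimal sketch of a reflective measurement specification, assuming the
# semopy package; indicators x1-x4 and the synthetic data are invented
# stand-ins, not the paper's 16 attributes.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 358                                  # sample size matching the study
latent = rng.normal(size=n)              # unobserved e-service quality score
data = pd.DataFrame(
    {f"x{i}": latent + rng.normal(scale=0.5, size=n) for i in range(1, 5)})

# Reflective: the latent construct is modelled as the cause of its indicators.
model = Model("esq =~ x1 + x2 + x3 + x4")
model.fit(data)
print(model.inspect())                   # estimated loadings and variances
```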
Abstract:
Prior research has established that the idiosyncratic volatility of security prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) enable large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach to the construction of a time-varying covariance matrix: a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model is applied to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulation. Instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to the probability distribution that best matches their characteristics, selected using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P 500 securities as the base, 2,000 simulated data sets are created using Monte Carlo simulation; in addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an improvement in risk-return performance. Using Value-at-Risk (VaR) as the criterion and the commercially available Crystal Ball portfolio optimizer as the benchmark, the new greedy technique clearly outperforms it on samples of the S&P 500 and Russell 1000 securities. The resulting improvements in performance are consistent across five security-selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional correlation GARCH).
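A minimal sketch of the greedy idea, under stated assumptions: weight is added in small increments to whichever asset most reduces the portfolio's historical VaR over Monte Carlo-simulated return scenarios. The multivariate-normal draws stand in for the abstract's IDCC-GARCH and distribution-fitting machinery, and the step size and confidence level are illustrative choices.

```python
# Hedged sketch: greedy weight allocation minimizing historical VaR on
# simulated return scenarios (placeholder for GARCH-driven draws).
import numpy as np

def historical_var(portfolio_returns, alpha=0.95):
    """Value-at-Risk as the alpha-quantile loss of the return sample."""
    return -np.quantile(portfolio_returns, 1 - alpha)

def greedy_weights(returns, step=0.01, alpha=0.95):
    """Greedily add weight `step` to whichever asset lowers portfolio VaR most."""
    n_obs, n_assets = returns.shape
    w = np.zeros(n_assets)
    for _ in range(int(round(1 / step))):
        best_var, best_j = np.inf, None
        for j in range(n_assets):
            trial = w.copy()
            trial[j] += step
            var = historical_var(returns @ (trial / trial.sum()), alpha)
            if var < best_var:
                best_var, best_j = var, j
        w[best_j] += step
    return w / w.sum()

rng = np.random.default_rng(0)
# Simulated return scenarios for 10 assets
returns = rng.multivariate_normal(
    mean=np.full(10, 0.0004), cov=0.0001 * np.eye(10), size=2000)
weights = greedy_weights(returns)
print(weights.round(3), historical_var(returns @ weights))
```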
Abstract:
This letter presents novel behaviour-based tracking of people in low-resolution imagery using instantaneous priors mediated by head pose. We extend the Kalman filter to adaptively combine motion information with an instantaneous prior belief about where a person will go, based on where they are currently looking. We apply this new method to pedestrian surveillance, using automatically derived head-pose estimates, although the theory is not limited to head-pose priors. We perform a statistical analysis of pedestrian gazing behaviour and demonstrate tracking performance on a set of simulated and real pedestrian observations. We show that, by using instantaneous 'intentional' priors, our algorithm significantly outperforms a standard Kalman filter on comprehensive test data.
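A minimal sketch of the idea, assuming a constant-velocity state model: after the standard Kalman prediction, the predicted velocity is blended toward a gaze-direction prior derived from head pose. The blending weight beta, the assumed walking speed, and the mapping from yaw to velocity are invented for illustration and need not match the letter's formulation.

```python
# Hedged sketch: constant-velocity Kalman filter whose predicted velocity
# is nudged toward an "intentional" prior derived from head pose.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise
R = 0.5 * np.eye(2)            # measurement noise

def predict(x, P, head_yaw, speed=1.0, beta=0.3):
    x = F @ x
    # Instantaneous prior: pedestrians tend to walk where they look.
    gaze_vel = speed * np.array([np.cos(head_yaw), np.sin(head_yaw)])
    x[2:] = (1 - beta) * x[2:] + beta * gaze_vel
    return x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(4) - K @ H) @ P

x, P = np.array([0., 0., 1., 0.]), np.eye(4)
for t in range(5):
    x, P = predict(x, P, head_yaw=0.2)
    x, P = update(x, P, z=np.array([t + 1.0, 0.2 * t]))
print(x)
```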
Abstract:
Li-ion batteries have been widely used in electric vehicles, and battery internal state estimation plays an important role in the battery management system. However, it is technically challenging, in particular for the estimation of the battery internal temperature and state-of-charge (SOC), two key state variables affecting battery performance. In this paper, a novel method is proposed for real-time simultaneous estimation of these two internal states, leading to a significantly improved battery model for real-time SOC estimation. To achieve this, a simplified battery thermoelectric model is first built, which couples a thermal submodel and an electrical submodel. The interactions between the battery's thermal and electrical behaviours are captured, thus offering a comprehensive description of the battery's thermal and electrical behaviour. To achieve more accurate internal state estimation, the model is trained by the simulation error minimization method, and model parameters are optimized by a hybrid optimization method combining a meta-heuristic algorithm and the least-squares approach. Further, time-varying model parameters under different heat dissipation conditions are considered, and a joint extended Kalman filter is used to simultaneously estimate both the battery internal states and the time-varying model parameters in real time. Experimental results based on testing data from LiFePO4 batteries confirm the efficacy of the proposed method.
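A hedged sketch of the joint estimation step: the state vector is augmented with a time-varying parameter (here a heat-dissipation coefficient h, modelled as a random walk) and both are updated by an extended Kalman filter. The one-state lumped thermal model and all constants are illustrative assumptions, not the paper's coupled thermoelectric model or its SOC dynamics.

```python
# Hedged sketch: joint EKF over an augmented state [temperature, parameter].
import numpy as np

rng = np.random.default_rng(0)
C_th, dt, T_amb = 50.0, 1.0, 25.0    # heat capacity [J/K], step [s], ambient [deg C]
R0 = 0.05                            # assumed ohmic resistance [ohm]

def f(x, I):
    """One-state thermal model; dissipation coefficient h is a random walk."""
    T, h = x
    T_next = T + dt * (I**2 * R0 - h * (T - T_amb)) / C_th
    return np.array([T_next, h])

def jac(x):
    T, h = x
    return np.array([[1 - dt * h / C_th, -dt * (T - T_amb) / C_th],
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])           # only temperature is measured
Q = np.diag([1e-3, 1e-6])            # process noise (state, parameter)
R = np.array([[0.25]])               # measurement noise

x, P = np.array([25.0, 1.0]), np.eye(2)
for _ in range(200):
    I = 10.0                                        # discharge current [A]
    F_k = jac(x)
    x, P = f(x, I), F_k @ P @ F_k.T + Q             # EKF predict
    z = np.array([x[0] + rng.normal(scale=0.5)])    # synthetic measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P   # EKF update
print("estimated [T_internal, h]:", x.round(3))
```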
Abstract:
This is a conceptual paper that aims to identify the key perspectives on business model innovation. Understanding the theoretical and conceptual underpinnings of business model innovation is crucial to helping organisations reinvent their business models. Through a comprehensive literature review, three perspectives are identified. Business model innovation is a complex construct, and there is no single approach or method for undertaking it. Successful business model innovation depends on a number of factors; dynamic capabilities and internal capabilities are just two of many important ones.
Abstract:
As part of a long-term project aimed at designing classroom interventions to motivate language learners, we have searched for a motivation model that could serve as a theoretical basis for the methodological applications. We have found that none of the existing models we considered were entirely adequate for our purpose, for three reasons: (1) they did not provide a sufficiently comprehensive and detailed summary of all the relevant motivational influences on classroom behaviour; (2) they tended to focus on how and why people choose certain courses of action, while ignoring or playing down the importance of motivational sources of executing goal-directed behaviour; and (3) they did not do justice to the fact that motivation is not static but dynamically evolving and changing in time, making it necessary for motivation constructs to incorporate a temporal axis. Consequently, partly inspired by Heckhausen and Kuhl's 'Action Control Theory', we have developed a new 'Process Model of L2 Motivation', which is intended both to account for the dynamics of motivational change in time and to synthesise many of the most important motivational conceptualisations to date. In this paper we describe the main components of this model, also listing a number of its limitations which need to be resolved in future research.
Abstract:
Thermal characterization of high-power light-emitting diodes (LEDs) and laser diodes (LDs) is one of the most critical issues in achieving optimal performance in terms of center wavelength, spectrum, power efficiency, and reliability. Unique electrical/optical/thermal characterizations are proposed to analyze the complex thermal issues of high-power LEDs and LDs. First, an advanced inverse approach, based on the transient junction temperature behavior, is proposed and implemented to quantify the resistance of the die-attach thermal interface (DTI) in high-power LEDs. A hybrid analytical/numerical model is utilized to determine an approximate transient junction temperature behavior, which is governed predominantly by the resistance of the DTI. Then, an accurate value of the resistance of the DTI is determined inversely from the experimental data over the predetermined transient time domain using numerical modeling. Second, the effect of junction temperature on the heat dissipation of high-power LEDs is investigated. The theoretical dependence on junction temperature of two major parameters governing heat dissipation – the forward voltage and the radiant flux – is reviewed. Measurements of the heat dissipation over a wide range of junction temperatures follow, quantifying the effect of these parameters using commercially available LEDs. An empirical model of heat dissipation is proposed for applications in practice. Finally, a hybrid experimental/numerical method is proposed to predict the junction temperature distribution of a high-power LD bar. A commercial water-cooled LD bar is used to demonstrate the proposed method. A unique experimental setup is developed and implemented to measure the average junction temperatures of the LD bar. After the heat dissipation of the LD bar is measured, the effective heat transfer coefficient of the cooling system is determined inversely. The characterized properties are used to predict the junction temperature distribution over the LD bar under high operating currents. The results are presented in conjunction with the wall-plug efficiency and the center wavelength shift.
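As a hedged sketch of the inverse approach, the snippet below fits a one-lump RC thermal network to a synthetic transient junction-temperature curve with scipy's least_squares, recovering the interface resistance. The single-RC model, power level, and constants are illustrative stand-ins for the paper's hybrid analytical/numerical model.

```python
# Hedged sketch: inverse extraction of a die-attach thermal interface (DTI)
# resistance by fitting a one-lump RC transient model to measured data.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
P_diss, T_amb = 2.0, 25.0            # dissipated power [W], ambient [deg C]

def junction_temp(t, R_dti, C_th):
    """Transient junction temperature of a one-lump RC thermal network."""
    return T_amb + P_diss * R_dti * (1 - np.exp(-t / (R_dti * C_th)))

t = np.linspace(0, 5, 200)           # transient time window [s]
measured = junction_temp(t, R_dti=8.0, C_th=0.05) + rng.normal(scale=0.1, size=t.size)

def residual(p):
    return junction_temp(t, *p) - measured

fit = least_squares(residual, x0=[5.0, 0.1], bounds=([0.1, 1e-3], [50.0, 1.0]))
print("recovered DTI resistance [K/W]:", fit.x[0].round(3))
```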
Abstract:
Matrix factorization (MF) has evolved into one of the standard practices for handling sparse data in the field of recommender systems. Funk singular value decomposition (Funk-SVD) is a variant of MF that emerged as a state-of-the-art method during the Netflix Prize competition, and it is still widely used, with modifications, in present-day recommender systems research. With data volumes growing at very high velocity, it is prudent to devise newer methods that can handle such data more accurately and efficiently than Funk-SVD in the context of recommender systems. In view of the growing number of data points, I propose a latent factor model that caters to both accuracy and efficiency by reducing the number of latent features of either users or items, making it less complex than Funk-SVD, in which the latent feature counts of users and items are equal and often larger. A comprehensive empirical evaluation of accuracy on two publicly available datasets, Amazon and ML-100K, reveals that the proposed methods achieve comparable accuracy with lower complexity than Funk-SVD.
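For context, here is a minimal sketch of the Funk-SVD baseline the proposal is compared against, trained with plain stochastic gradient descent and without bias terms. The asymmetric variant proposed in the paper (fewer latent features on one side) is not reproduced, and all hyperparameters are illustrative.

```python
# Hedged sketch: classic Funk-SVD with SGD; symmetric latent dimensions.
import numpy as np

def funk_svd(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=20):
    """ratings: list of (user, item, rating) triples."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                  # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
P, Q = funk_svd(ratings, n_users=2, n_items=3)
print("predicted r(1,1):", P[1] @ Q[1])
```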
Abstract:
A number of laws in Canada which uphold rights are referred to as quasi-constitutional by the courts in recognition of their special importance. Quasi-constitutional statutes are enacted through the regular legislative process, yet they are interpreted and applied in a fashion remarkably similar to constitutional law, and they therefore have an important effect on other legislation. Quasi-constitutionality has received surprisingly limited scholarly attention, and very few serious attempts at explaining its significance have been made. This dissertation undertakes a comprehensive study of quasi-constitutionality which considers its theoretical basis, its interpretation and legal significance, as well as its similarities to comparable forms of law in other Commonwealth jurisdictions. Part I examines the theoretical basis of quasi-constitutionality and its relationship to the Constitution. As a statutory and common law form of fundamental law, quasi-constitutionality is shown to signify an association with the Canadian Constitution and the foundational principles that underpin it. Part II proceeds to consider the special rules of interpretation applied to quasi-constitutional legislation, the basis of this interpretative approach, and the connection between the interpretation of similar provisions in quasi-constitutional legislation and the Constitution. As a statutory form of fundamental law, quasi-constitutional legislation is given a broad, liberal and purposive interpretation which significantly expands the rights it protects. The theoretical basis of this approach is found both in the fundamental nature of the rights upheld by quasi-constitutional legislation and in legislative intent. Part III explores how quasi-constitutional statutes affect the interpretation of regular legislation and how they are used for the purposes of judicial review. Quasi-constitutional legislation has a significant influence over regular statutes in the interpretative exercise, which in some instances results in conflicting statutes being declared inoperable. The basis of this form of judicial review is shown to be rooted in statutory interpretation, and as such it provides an interesting model of rights protection and judicial review that is not conflated with constitutional and judicial supremacy.
Abstract:
Obesity affects the functional capability of adipose-derived stem cells (ASCs) and their effective use in regenerative medicine, through mechanisms that are still poorly understood. Here we employed a multiplatform (LC/MS, CE/MS, GC/MS) untargeted metabolomics approach to investigate the metabolic alterations underlying the differences observed in obese-derived ASCs. The metabolic fingerprint (metabolites within the cells) and footprint (metabolites secreted into the culture medium) of obese- and non-obese-derived ASCs from humans and mice were characterized, providing valuable information. Metabolites associated with glycolysis, the TCA cycle, the pentose phosphate pathway and the polyol pathway were increased in the footprint of obese-derived human ASCs, indicating alterations in carbohydrate metabolism, whereas the murine model highlighted deep differences in lipid and amino acid catabolism. These results provide new insights into the ASC metabolome, enhancing our understanding of the processes underlying ASC stemness capacity and its relationship with obesity in different cell models.
Abstract:
This paper presents a best-practice model for the redesign of virtual learning environments (VLEs) within the creative arts to augment blended learning. In considering a blended learning best-practice model, three factors should be considered: conscious and active human intervention, good learning design and pedagogical input, and sensitive handling of the process by trained professionals. This study is based on a comprehensive VLE content analysis conducted across two academic schools within the creative arts at one Post-92 higher education (HE) institution. It was found that four main barriers affect the use of the VLE within the creative arts: lack of flexibility in relation to navigation and interface, the time required to develop resources, the competency level of tutors (confidence in developing online resources balanced against other flexible open resources) and factors affecting the engagement of ‘digital residents’. The experimental approach adopted in this study involved a partnership between the learning technology advisor and academic staff, which resulted in a VLE best-practice model focused directly on improving aesthetics and navigation. The approach allowed a purposive sample of academic staff to engage as participants, stepping back cognitively from their routine practices in relation to their use of the VLE and questioning how they embed the VLE to support teaching and learning. The model presented in this paper identifies a potential solution to the challenges of integrating the VLE within the creative arts. The findings of this study demonstrate a positive impact on staff and student experience and provide a sustainable model of good practice for the redesign of the VLE within creative disciplines.
Abstract:
Gastrointestinal stromal tumors (GISTs) are the most common mesenchymal tumors of the gastrointestinal tract, arising from the interstitial cells of Cajal (ICCs) or their precursors. The vast majority of GISTs (75–85%) harbor KIT or PDGFRA mutations. A small percentage of GISTs (about 10–15%) do not harbor any of these driver mutations and have historically been called wild-type (WT). Among them, 20% to 40% show loss of function of the succinate dehydrogenase (SDH) complex and are defined as SDH-deficient GISTs. SDH-deficient GISTs display distinctive clinical and pathological features and can be sporadic or associated with the Carney triad or Carney-Stratakis syndrome. These tumors arise most frequently in the stomach, with a predilection for the distal stomach and antrum, show multi-nodular growth, display a histological epithelioid phenotype, and present frequent lymphovascular invasion. The occurrence of lymph node metastases and an indolent course are representative features of SDH-deficient GISTs. This subset of GIST is known for the immunohistochemical loss of succinate dehydrogenase subunit B (SDHB), which signals the loss of function of the entire SDH complex. The overall aim of my PhD project is the comprehensive characterization of SDH-deficient GISTs. Throughout the project, clinical, molecular and cellular characterizations were performed using next-generation sequencing (NGS) technologies, which have the potential to allow the identification of molecular patterns useful for diagnosis and the development of novel treatments. Moreover, while there are many different cell lines and preclinical models of KIT/PDGFRA-mutant GIST, no reliable cell model of SDH-deficient GIST has been developed to date that could be used for studies on tumor evolution and in vitro assessment of drug response. Therefore, another aim of this project was to develop a preclinical model of SDH-deficient GIST using induced pluripotent stem cell (iPSC) technology.
Abstract:
In this doctoral dissertation, a comprehensive methodological approach for the assessment of the safety conditions of river embankments is proposed, based on the integrated use of laboratory testing, physical modelling and finite element (FE) numerical simulations, with the aim of contributing to a better understanding of the effect of time-dependent hydraulic boundary conditions on the hydro-mechanical response of river embankments. The case study and materials selected for the present research project are representative of the riverbank systems of Alpine and Apennine tributaries of the River Po (Northern Italy), which have recently experienced several sudden overall collapses. The outcomes of a centrifuge test, carried out at an enhanced gravity field of 50 g on a riverbank model made of a compacted silty sand mixture overlying a homogeneous clayey silt foundation layer and subjected to a simulated flood event, have been used to define a robust and realistic experimental benchmark. In order to reproduce the observed experimental behaviour, a first set of numerical simulations was carried out assuming, for both the embankment and the foundation unit, rigid soil porous media under partially saturated conditions. The mechanical and hydraulic soil properties adopted in the numerical analyses were carefully estimated from standard saturated triaxial, oedometer and constant-head permeability tests. Afterwards, advanced suction-controlled laboratory tests were carried out to investigate the effect of suction and confining stress on the shear strength and compressibility characteristics of the filling material, and a second set of numerical simulations was run with the soil parameters updated on the basis of these tests. The final aim of the study is the quantitative estimation of the predictive capabilities of the calibrated numerical tools, achieved by systematically comparing the results of the FE simulations with the experimental benchmark.
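As a loose, hedged analogue of the time-dependent hydraulic boundary conditions studied here, the sketch below advances a 1D linear pressure-head diffusion problem with a triangular flood hydrograph imposed on the riverside boundary. The geometry, diffusivity, and hydrograph are invented; the dissertation's partially saturated FE analyses are far richer than this toy explicit finite-difference scheme.

```python
# Hedged sketch: 1D transient head diffusion through an embankment section
# with a time-varying river-level boundary (stand-in for a flood event).
import numpy as np

L, nx, D = 10.0, 101, 0.05          # width [m], grid points, diffusivity [m^2/s]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                # stable explicit time step
h = np.zeros(nx)                    # initial pressure head [m]

def river_level(t, peak=4.0, t_peak=3e4):
    """Triangular flood hydrograph on the riverside boundary."""
    return peak * max(0.0, 1 - abs(t - t_peak) / t_peak)

t = 0.0
for _ in range(2000):
    h[0] = river_level(t)           # riverside boundary follows the flood
    h[-1] = 0.0                     # landside boundary held at zero
    h[1:-1] += D * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
    t += dt
print("head at embankment mid-section [m]:", round(h[nx // 2], 3))
```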
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner workings is still lacking. In this work, we try to understand the behavior of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of training occupies a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited in a training loop which eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides its practical applications, our analysis shows that a new and improved modelling of Deep Learning systems can pave the way to new and more efficient algorithms.
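A hedged sketch of how such a temperature-guided pruning loop might look: each filter's "temperature" is proxied here by the variance of its weight updates across training snapshots, and the hottest filters are marked for removal. Both the proxy and the placeholder data are assumptions; the work's thermodynamic definition of temperature may differ.

```python
# Hedged sketch: rank convolutional filters by a "temperature" proxy
# (variance of weight updates) and prune the hottest ones.
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_filters, filter_size = 200, 32, 3 * 3 * 16

# Recorded weight snapshots per filter across training (placeholder data).
snapshots = rng.normal(size=(n_steps, n_filters, filter_size)).cumsum(axis=0)

updates = np.diff(snapshots, axis=0)             # per-step weight changes
temperature = updates.var(axis=(0, 2))           # one scalar per filter

k = 8
hottest = np.argsort(temperature)[-k:]           # candidates for pruning
mask = np.ones(n_filters, dtype=bool)
mask[hottest] = False                            # keep only "colder" filters
print("kept filters:", mask.sum(), "| pruned:", hottest)
```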
Abstract:
There are many deformable objects, such as papers, clothes and ropes, in a person's living space. For a robot to automate daily tasks, it is important that it can work with these deformable objects. Manipulation of deformable objects is a challenging task for robots because these objects have an infinite-dimensional configuration space and are expensive to model, making real-time monitoring, planning and control difficult. It forms a particularly important field of robotics, with relevant applications in different sectors such as medicine, food handling, manufacturing, and household chores. This report presents a clear review of the approaches that have been used and are currently in use, along with future developments towards achieving this task. My research focuses mainly on the last 10 years, over which I have systematically reviewed many articles to gain a clear understanding of developments in this field. The main contribution is to show the whole landscape of this concept and provide a broad view of how it has evolved. I also explain my research methodology, following the analysis from the past to the present, along with my thoughts for the future.