50 results for Online Systems
Abstract:
Science has developed from rational-empirical methods, with the consequence that existing phenomena are represented without an understanding of their root causes. The question currently at stake is the sense of being; put simply, one can say that dogmatic religion leads to misinterpretations, while the empirical sciences contain exact rational representations of phenomena. Science has thus been able to rid itself of dogmatic religion. The project for the sciences of being seeks to return to reality its essential foundations; under the plan of the theory of systems, this necessarily involves a search for the meaning of Reality.
Abstract:
The current economic crisis, together with the Internet revolution, has had direct impacts on the franchise sector in Spain, in particular on its unique communication network. The aim of this research is to analyse how Spanish franchise companies have adapted to these changes through their corporate communications management. We want to determine whether the management of communications is adequate for the growth and consolidation of companies in the market. Corporate communications plans and organizational structures were analyzed to verify whether or not information technology (i.e. the use of the Internet) is maximized, the communications aspect being a critical area of company growth. We found that most franchise companies surveyed had adapted well to the changes in information technology, despite economic challenges. However, the Internet as a communications tool has been limited to its utility as a “bulletin board” for information. The marketing advantage of Internet communication, or its use as an avenue for customer exchange and the exchange of goods and services, has yet to be maximized. Future research may look into the details of how companies can maximize the communications-marketing advantage that the Internet can contribute to the franchise sector.
Abstract:
In this paper we deal with parameterized linear inequality systems in the n-dimensional Euclidean space, whose coefficients depend continuously on an index ranging in a compact Hausdorff space. The paper is developed in two different parametric settings: one of only right-hand-side perturbations of the linear system, and one in which both sides of the system can be perturbed. Appealing to the background on the calmness property, and exploiting the specifics of the current linear structure, we derive different characterizations of the calmness of the feasible set mapping, and provide an operative expression for the calmness modulus when confined to finite systems. In the paper, the role played by the Abadie constraint qualification in relation to calmness is clarified and illustrated by different examples. We point out that this approach has the virtue of tackling the calmness property exclusively in terms of the system's data.
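For reference, the calmness property that the paper builds on is standard and can be stated as follows (notation assumed here, not taken from the paper itself):

```latex
% Calmness of the feasible set mapping \mathcal{F} at (\bar p, \bar x)
% with (\bar p, \bar x) \in \operatorname{gph} \mathcal{F}:
% there exist \kappa \ge 0 and neighborhoods U of \bar p, V of \bar x with
d\bigl(x, \mathcal{F}(\bar p)\bigr) \le \kappa \, \|p - \bar p\|
\quad \text{for all } x \in \mathcal{F}(p) \cap V, \; p \in U.
% The calmness modulus is the infimum of the constants \kappa above.
```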
Abstract:
The development of applications and services for mobile systems faces a varied range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines adjustable computing methods. The proposal combines imprecise-computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, imprecise-computation scheduling of the embedded system's workload is used to decide when to move computation to the cloud, according to the priority and response time of the tasks to be executed, and thereby meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example in which this technique is tested in running contexts with heterogeneous workloads, in order to check the validity of the proposed model, is also described.
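A minimal sketch of the kind of offloading decision such a scheduler makes; the task fields, cost parameters and thresholds below are hypothetical, not the authors' actual method:

```python
from dataclasses import dataclass

@dataclass
class Task:
    priority: int
    mandatory_ms: float   # local cost of the mandatory part
    optional_ms: float    # local cost of the optional (imprecise) part
    deadline_ms: float

def schedule(task, cloud_speedup=4.0, net_delay_ms=80.0):
    """Decide where to run the optional part of an imprecise-computation task.

    Returns 'local-full', 'cloud-optional', or 'local-mandatory-only'.
    net_delay_ms stands in for the estimated network delay."""
    if task.mandatory_ms + task.optional_ms <= task.deadline_ms:
        return "local-full"
    # Offload the optional part: pay the network delay, gain cloud speed-up;
    # the local mandatory part runs in parallel with the cloud request.
    cloud_ms = max(task.mandatory_ms,
                   net_delay_ms + task.optional_ms / cloud_speedup)
    if cloud_ms <= task.deadline_ms:
        return "cloud-optional"
    return "local-mandatory-only"   # degrade quality, keep the deadline
```

The three outcomes reflect the imprecise-computation idea: the optional part may be run fully, offloaded, or dropped to trade result quality for timeliness.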
Abstract:
Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and through the proposal of a structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction, and that these benefits can be grouped into three categories: strategic, economic and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefits is ultimately sought with outsourcing.
Abstract:
In this paper, the authors extend and generalize the system dynamics methodology based on differential equations as state equations, allowing first-order transformed functions to be applied not only to the primitive or original variables but also to more complex expressions derived from them, and extending the rules that determine the generation of transforms of order higher than zero (variable or primitive). It is also demonstrated that for every model of complex reality there exists a model that is complex from the syntactic and semantic points of view. The theory is exemplified with a concrete model: the MARIOLA model.
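As a minimal sketch of the underlying machinery, a system-dynamics model is a set of state equations integrated step by step; the two-variable model and the Euler integrator below are purely illustrative, not the MARIOLA model:

```python
def simulate(f, x0, dt=0.1, steps=100):
    """Integrate the state equations x' = f(x) with explicit Euler steps.
    Returns the trajectory as a list of state vectors."""
    xs = [list(x0)]
    x = list(x0)
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        xs.append(list(x))
    return xs

def f(x):
    """Hypothetical two-variable model: a primitive variable (stock) and a
    derived expression (flow) coupled through first-order dynamics."""
    stock, flow = x
    return [flow, -0.5 * stock]

trajectory = simulate(f, [1.0, 0.0], dt=0.01, steps=500)
```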
Abstract:
Ideologies face two critical problems in reality: the problem of commitment and the problem of validation. Commitment and validation are two separate phenomena, in spite of the near-universal myth that humans are committed because their beliefs are valid. Ideologies must not only seem external and valid but also be worth whatever discomforts believing entails. In this paper the authors develop a theory of social commitment and social validation using concepts of validation from neutrosophic logic.
Abstract:
Purpose: To analyze and define the possible errors that may be introduced in keratoconus classification when the keratometric corneal power is used in such classification. Materials and methods: Retrospective study including a total of 44 keratoconus eyes. A comprehensive ophthalmologic examination was performed in all cases, which included a corneal analysis with the Pentacam system (Oculus). Classical keratometric corneal power (Pk), Gaussian corneal power (Pc Gauss), True Net Power (TNP) (Gaussian power neglecting the corneal thickness effect), and an adjusted keratometric corneal power (Pkadj) (keratometric power considering a variable keratometric index) were calculated. All cases included in the study were classified according to five different classification systems: Alió-Shabayek, Amsler-Krumeich, Rabinowitz-McDonnell, collaborative longitudinal evaluation of keratoconus (CLEK), and McMahon. Results: When Pk and Pkadj were compared, differences in the grading of keratoconus cases were found in 13.6% of eyes when the Alió-Shabayek or the Amsler-Krumeich systems were used. Likewise, grading differences were observed in 22.7% of eyes with the Rabinowitz-McDonnell and McMahon classification systems and in 31.8% of eyes with the CLEK classification system. All cases reclassified using Pkadj were assigned to a less severe stage, indicating that the use of Pk may lead to classifying a normal cornea as keratoconus. In general, the results obtained using Pkadj, Pc Gauss or the TNP were equivalent. Differences between Pkadj and Pc Gauss were within ± 0.7D. Conclusion: The use of classical keratometric corneal power may lead to incorrect grading of the severity of keratoconus, with a trend toward a more severe grading.
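For orientation, the two corneal-power definitions compared here can be sketched as follows; the refractive indices and the sample radii are standard textbook values, not data from the study:

```python
# Standard refractive indices (assumed, not taken from the paper):
N_AIR, N_CORNEA, N_AQUEOUS = 1.000, 1.376, 1.336
N_KERATOMETRIC = 1.3375  # conventional fictitious keratometric index

def keratometric_power(r_anterior_m):
    """Classical Pk: uses only the anterior radius and the keratometric index."""
    return (N_KERATOMETRIC - N_AIR) / r_anterior_m

def gaussian_power(r_anterior_m, r_posterior_m, thickness_m):
    """Pc Gauss: thick-lens formula with both surfaces and corneal thickness."""
    p1 = (N_CORNEA - N_AIR) / r_anterior_m       # anterior surface power
    p2 = (N_AQUEOUS - N_CORNEA) / r_posterior_m  # posterior surface power
    return p1 + p2 - (thickness_m / N_CORNEA) * p1 * p2

# Typical cornea: r1 = 7.8 mm, r2 = 6.5 mm, central thickness = 0.55 mm
pk = keratometric_power(7.8e-3)
pg = gaussian_power(7.8e-3, 6.5e-3, 0.55e-3)
```

For these typical values Pk comes out higher than Pc Gauss, which is the direction of the grading bias the study reports.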
Abstract:
Decision support systems (DSS) support business or organizational decision-making activities, which require access to information stored internally in databases or data warehouses, and externally on the Web, accessed by Information Retrieval (IR) or Question Answering (QA) systems. Graphical interfaces to query these sources of information make it easy to constrain query formulation dynamically based on user selections, but they lack flexibility in query formulation, since expressive power is limited by the user interface design. Natural language interfaces (NLI) are regarded as the optimal solution. However, especially for non-expert users, truly natural communication is difficult to realize effectively. In this paper, we propose an NLI that improves the interaction between the user and the DSS by referencing previous questions or their answers (i.e. anaphora, such as the pronoun reference in “What traits are affected by them?”), or by eliding parts of the question (i.e. ellipsis, such as “And to glume colour?” after the question “Tell me the QTLs related to awn colour in wheat”). Moreover, to overcome one of the main problems of NLIs, the difficulty of adapting an NLI to a new domain, our proposal is based on ontologies that are obtained semi-automatically from a framework that allows the integration of internal and external, structured and unstructured information. Therefore, our proposal can interface with databases, data warehouses, QA and IR systems. Because of the high ambiguity of the natural language resolution process, our proposal is presented as an authoring tool that helps the user to query efficiently in natural language. Finally, our proposal is tested on a DSS case scenario about Biotechnology and Agriculture, whose knowledge base is the CEREALAB database as internal structured data, and the Web (e.g. PubMed) as external unstructured information.
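A toy sketch of the anaphora mechanism described above, with hypothetical rules and data; the actual system is ontology-based, and this simple pronoun substitution only illustrates the idea:

```python
class DialogueContext:
    """Keeps the entities returned by the previous answer so that a follow-up
    question containing a pronoun can be rewritten into a standalone query."""

    def __init__(self):
        self.last_entities = []

    def resolve(self, question):
        # Anaphora: replace a pronoun with the entities previously returned.
        tokens = question.replace("?", "").split()
        for pronoun in ("them", "it", "they"):
            if pronoun in tokens:
                names = " and ".join(self.last_entities)
                question = question.replace(pronoun, names)
        return question

ctx = DialogueContext()
ctx.last_entities = ["QTL1", "QTL2"]   # hypothetical previous answer
resolved = ctx.resolve("What traits are affected by them?")
```

Ellipsis ("And to glume colour?") would be handled analogously, by splicing the fragment into the slots of the previous question.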
Abstract:
The purpose of this paper is to identify the benefits of integrated management systems by comparing them with the benefits obtained through the individual implementation of the ISO 9001 and ISO 14001 standards. The methodology used is a literature review based on an electronic search in the Web of Science, ScienceDirect, Scopus and Emerald databases. Findings show that although some benefits are common regardless of the management system type, the benefits obtained with integration are greater than those of the management systems considered separately, because of the wider scope considered in integration. This is one of the first papers, to the best of our knowledge, to compare the benefits of the two management system standards when implemented separately and when integrated. In addition, some ideas are proposed for consideration in future research on the internalization of management systems and the selection effect.
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective is to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent one may solve model Hamiltonians exactly, and the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, that has the novelty of using as the seed of the Lanczos process a mean field (Hartree–Fock) determinant (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)?
The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
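As a toy illustration of exact diagonalization of a Hubbard-type model (not the LCI method of the review), the two-site Hubbard dimer at half filling can be solved in a few lines; the basis ordering and sign convention below are one standard choice:

```python
import numpy as np

def hubbard_dimer(t=1.0, U=4.0):
    """Exact diagonalization of the two-site Hubbard model in the Sz = 0
    sector. Basis: |up1,dn2>, |dn1,up2>, |updn,0>, |0,updn>, with fermionic
    signs included in the hopping matrix elements."""
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [ -t,   t,   U, 0.0],
                  [ -t,   t, 0.0,   U]])
    return np.linalg.eigvalsh(H)   # ascending eigenvalues

energies = hubbard_dimer()
# The ground state reproduces the analytic result (U - sqrt(U^2 + 16 t^2))/2.
```

This is the smallest case where the competition between hopping and on-site repulsion, central to the Hubbard picture discussed above, is exactly solvable.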
Abstract:
Despite the proliferation of academic research on information systems outsourcing, few studies analyze the characteristics of outsourcing contracts. This research aims to provide an in-depth description of information systems outsourcing. An additional objective is to examine how these characteristics evolve over time. Finally, this study reports on the usefulness of measuring such characteristics over time to assess the maturity level of information systems outsourcing. The data were gathered from the responses of the information systems managers of the largest Spanish firms to a questionnaire. This longitudinal study covers 12 years of research and compares the authors' previous results with those of the present study.
Abstract:
The edges of graphene and graphene-like systems can host localized states with evanescent wave functions whose properties are radically different from those of the Dirac electrons in the bulk. This happens in a variety of situations that are reviewed here. First, zigzag edges host a set of localized non-dispersive states at the Dirac energy. At half filling, these states are expected to be prone to ferromagnetic instability, causing a very interesting type of edge ferromagnetism. Second, graphene under the influence of external perturbations can host a variety of topological insulating phases, including the conventional quantum Hall effect, the quantum anomalous Hall (QAH) phase and the quantum spin Hall phase, in all of which conduction can only take place through topologically protected edge states. Here we provide a unified vision of the properties of all these edge states, examined in the light of the same one-orbital tight-binding model. We consider the combined action of interactions, spin–orbit coupling and magnetic field, which produces a wealth of different physical phenomena. We briefly address what has actually been observed experimentally.
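As a minimal illustration of the zigzag edge states in the one-orbital tight-binding model, the standard textbook mapping (not code from the review) reduces a zigzag ribbon at fixed Bloch momentum k to an SSH-like chain with alternating hoppings 2t·cos(k/2) and t; near-zero eigenvalues of that chain signal the flat edge band:

```python
import numpy as np

def zigzag_ribbon_levels(k, n_cells=20, t=1.0):
    """Transverse spectrum of a zigzag graphene ribbon at Bloch momentum k,
    via the mapping to a finite chain with alternating hoppings
    t1 = 2*t*cos(k/2) and t (one-orbital tight-binding model)."""
    t1 = 2.0 * t * np.cos(k / 2.0)
    hops = [t1 if i % 2 == 0 else t for i in range(2 * n_cells - 1)]
    H = np.diag(hops, 1) + np.diag(hops, -1)
    return np.linalg.eigvalsh(H)

# For 2*pi/3 < k < 4*pi/3 we have |t1| < t and the chain hosts a pair of
# near-zero-energy states localized at the edges; outside that window the
# spectrum is gapped around zero.
edge_levels = zigzag_ribbon_levels(k=0.9 * np.pi)
bulk_levels = zigzag_ribbon_levels(k=0.3 * np.pi)
```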
Abstract:
Numerical modelling methodologies are important because of their application to engineering and scientific problems, since there are processes for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships that are defined a priori between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a general sense in this space. Using this approach requires solving a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied by considering two real applications: a problem in hydrodynamics and an engineering problem related to fluids, heat and transport in an energy generation plant. A test of the predictive capacity of the methodology is also performed using a cross-validation method.
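A minimal one-dimensional sketch of the idea: fitting scattered data with a piecewise-linear (hat-function) element basis by least squares. The uniform node layout, NumPy's generic least-squares solver (in place of the paper's Galerkin assembly and fast solver) and the test data are all illustrative assumptions:

```python
import numpy as np

def fit_piecewise_linear(x, y, n_nodes=8):
    """Fit scattered data (x_i, y_i) with a piecewise-linear function built
    from hat basis functions on uniform nodes, solving the normal equations
    in the least-squares sense."""
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    h = nodes[1] - nodes[0]
    # Design matrix: B[i, j] = hat_j(x_i), a tent of half-width h at node j.
    B = np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / h)
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
    return nodes, coeffs

# Hypothetical usage: noisy samples of an unknown 1-D "hyper-surface".
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)
nodes, coeffs = fit_piecewise_linear(x, y)
```

Because each hat function overlaps only its neighbours, the resulting linear system is banded, which is what makes a fast solver possible in the full method.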