892 results for Dimensional Modeling and Virtual Reality


Relevance: 100.00%

Abstract:

I distinguish two ways that philosophers have approached and explained the reality and status of human social institutions, which I call the "naturalist" and "post-naturalist" approaches. Common to both is the understanding that the status of mind, and its relation to the world or "nature", has implications for how institutional reality is conceived. Naturalists hold that mind is explicable within a scientific frame that conceives of mind as a fundamentally material process; by extension, social reality is also materially explicable. Post-naturalists critique this view, holding instead that naturalism is parasitic on contemporary science: it is therefore non-compulsory, and it distorts how we ought to understand mind and social reality. The first chapter compares naturalism and post-naturalism. The second chapter traces out the dimensions of a post-naturalist account of mind and social reality. Post-naturalists conceive of mind and its activity of thought as sui generis, and it follows that social institutions are better understood as a rational mind's mode of expression in the world. Post-naturalism conceives of social reality as a necessary dimension of thought: thought requires a second person, and thereby a tradition or context of norms that both structure its expression and become the products of that expression. This contrasts with the idea that social reality is a production of minds and thereby derivative. Social reality, self-conscious thought, and thought of the second person are therefore three dimensions of a greater unity.

Relevance: 100.00%

Abstract:

Purpose: This study reviews research published between 2007 and 2015 on tourism and hotel demand modeling and forecasting, with a view to identifying the emerging topics and methods studied and to pointing out future research directions in the field.

Design/methodology/approach: Articles on tourism and hotel demand modeling and forecasting published in both Science Citation Index (SCI) and Social Science Citation Index (SSCI) journals were identified and analyzed.

Findings: The review found that studies focused on hotel demand are relatively fewer than those on tourism demand. Studies have also increasingly moved away from aggregate tourism demand analysis, while disaggregate markets and niche products have attracted growing attention. Some studies have gone beyond neoclassical economic theory to seek additional explanations of the dynamics of tourism and hotel demand, such as environmental factors, tourist online behavior and consumer confidence indicators. More sophisticated techniques, such as nonlinear smooth transition regression, mixed-frequency modeling and nonparametric singular spectrum analysis, have also been introduced to this research area.

Research limitations/implications: The main limitation of this review is that it covers only the English-language literature; future reviews of this kind should also include articles published in other languages. The review provides a useful guide for researchers interested in future work on tourism and hotel demand modeling and forecasting.

Practical implications: The review offers suggestions and recommendations for improving the efficiency of tourism and hospitality management practices.

Originality/value: The review identifies current trends in tourism and hotel demand modeling and forecasting research and points out future research directions.
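
Of the newer techniques named in the findings, nonlinear smooth transition regression is representative. As a point of reference (a textbook two-regime logistic form, not drawn from any particular reviewed article), demand y_t can be modeled as

$$ y_t = \mathbf{x}_t'\boldsymbol{\beta}_1 + \left(\mathbf{x}_t'\boldsymbol{\beta}_2\right) G(s_t;\gamma,c) + \varepsilon_t, \qquad G(s_t;\gamma,c) = \frac{1}{1 + e^{-\gamma(s_t - c)}}, $$

where the logistic transition function G shifts the coefficients smoothly between two regimes as the transition variable s_t (for example, income or a crisis indicator) passes the location parameter c, with γ controlling the speed of the transition.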

Relevance: 100.00%

Abstract:

This work focuses on circular migration between America and Europe, particularly the discussion of knowledge transfer and the way social networks reconfigure how information is distributed among people who have left their own country for labor or academic reasons. The main purpose is to study the impact of social media use on migration flows between Mexico and Spain, specifically its use by Mexican migrants who moved abroad for several years, principally for educational purposes, and then returned to their home locations in Mexico seeking to integrate into the labor market. Our data collection concentrated exclusively on a Facebook group created by Mexicans who mostly reside in Barcelona, Spain, or wish to travel to the city for economic, educational or tourist reasons. The results show that while social networks are spaces for exchange and integration, this group exhibits a clear tendency to "narrow lines" and look back to the homeland, slowing the process of social integration in the new context.

Relevance: 100.00%

Abstract:

Bulk gallium nitride (GaN) power semiconductor devices have gained significant interest in recent years, creating the need for technology computer-aided design (TCAD) simulation to accurately model and optimize these devices. This paper comprehensively reviews and compares the GaN physical models and model parameters reported in the literature, and discusses the appropriate selection of these models and parameters for TCAD simulation. 2-D drift-diffusion semi-classical simulations are carried out for 2.6 kV and 3.7 kV bulk GaN vertical PN diodes. The simulated forward current-voltage and reverse breakdown characteristics are in good agreement with measured data, even over a wide temperature range.
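
For reference, the drift-diffusion formulation used in such simulations couples Poisson's equation with the carrier continuity and current equations; a standard statement of the system (independent of this paper's specific model and parameter choices) is

$$ \nabla \cdot (\varepsilon \nabla \psi) = -q\,(p - n + N_D^+ - N_A^-), $$

$$ \frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla \cdot \mathbf{J}_n + G_n - R_n, \qquad \frac{\partial p}{\partial t} = -\frac{1}{q}\,\nabla \cdot \mathbf{J}_p + G_p - R_p, $$

$$ \mathbf{J}_n = q\,\mu_n n \mathbf{E} + q D_n \nabla n, \qquad \mathbf{J}_p = q\,\mu_p p \mathbf{E} - q D_p \nabla p, $$

where ψ is the electrostatic potential and n and p are the carrier densities. The physical models reviewed in the paper enter through quantities such as the mobilities μ_n and μ_p, the generation and recombination terms G and R, and the impact-ionization coefficients that determine breakdown.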

Relevance: 100.00%

Abstract:

The application of custom classification techniques and posterior probability modeling (PPM) using Worldview-2 multispectral imagery to archaeological field survey is presented in this paper. Research focuses on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets, including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal components analysis is further used to test for and reduce the dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types, and that it performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
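
A minimal sketch of the classification and posterior-probability step described above, assuming the Worldview-2 bands, band difference ratios and topographical derivatives have already been stacked into a per-pixel feature matrix (all array names here are hypothetical stand-ins):

```python
# Sketch: PCA + LDA posterior probability modeling for survey targeting.
# "features" stands in for the stacked Worldview-2 bands, BDRs and
# topographic derivatives; "labels" marks workshop pixels (1) vs. non-sites (0).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 12))                       # placeholder stack
labels = (features[:, 0] + features[:, 3] > 1).astype(int)   # placeholder truth

# Reduce the dimensionality caused by redundant, correlated layers.
pca = PCA(n_components=0.95)          # keep components explaining 95% variance
components = pca.fit_transform(features)

# Train the LDA classifier on the principal components.
lda = LinearDiscriminantAnalysis()
lda.fit(components, labels)

# Posterior probability surface: P(workshop | pixel features).
posterior = lda.predict_proba(components)[:, 1]
print(posterior[:5])
```

In the workflow described above, a posterior surface of this kind is then combined with least-cost analysis to prioritize survey areas.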

Relevance: 100.00%

Abstract:

This thesis project consists of a corporate presentation case study (FAAC), conceived around the use of Virtual Reality. It covers the design process from the initial brief through to the virtual environment used to present, at trade fairs, a new mobile bollard designed for infrastructure protection.

Relevance: 100.00%

Abstract:

This thesis describes the development principle of an industrial feeding machine. The system is to be installed between two industrial machines: it must bring the incoming products into step with, and synchronize them to, the downstream machine. The machine orders the items using a series of conveyor belts with adjustable speed.
The development was carried out at the Laboratorio Liam at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis; its goal is therefore to modernize the previous application, since the device that performed product phasing was a Siemens PLC that is no longer commercially available. The thesis covers the study of the application and its modeling in Matlab-Simulink, and then proceeds to an implementation, albeit not a conclusive one, in TwinCAT 3.
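
As a rough illustration of the phasing principle (a hypothetical proportional control law sketched in Python, not the thesis's Matlab-Simulink or TwinCAT 3 implementation), a single adjustable-speed belt can be commanded so that each product reaches the handoff point in phase with the downstream machine's cycle:

```python
# Illustrative phasing law for one adjustable-speed conveyor belt.
# All names, units and gains are hypothetical.
V_NOMINAL = 0.8   # nominal belt speed, m/s
KP = 0.5          # proportional gain toward the speed that lands on time

def belt_speed(product_pos, handoff_pos, t, cycle_period):
    """Commanded belt speed for one control step.

    product_pos: product position along the belt (m)
    handoff_pos: position where the downstream machine takes over (m)
    t: current time (s); cycle_period: downstream machine cycle (s)
    """
    # Time remaining until the downstream machine's next pickup instant.
    time_to_pickup = cycle_period - (t % cycle_period)
    # Speed that would deliver the product exactly at that instant.
    required = (handoff_pos - product_pos) / time_to_pickup
    # Trim the nominal speed toward the required speed.
    return V_NOMINAL + KP * (required - V_NOMINAL)

print(belt_speed(product_pos=0.2, handoff_pos=1.0, t=3.1, cycle_period=0.5))
```

A real machine would cascade several such belts and clamp the command to the belts' speed limits; the sketch only shows the phase-correction idea.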

Relevance: 100.00%

Abstract:

This work provides a holistic investigation into feature modeling within software product lines. It identifies limitations and challenges in current feature modeling approaches, including, but not limited to, the lack of a satisfactory cognitive presentation, poor support for scalable systems, inflexibility in adapting to change, the absence of predictability of model behavior, and the lack both of probabilistic quantification of a model's implications and of decision support for reasoning under uncertainty. The thesis addresses these challenges by proposing a series of solutions. The first is the construction of a Bayesian Belief Feature Model, a novel modeling approach capable of quantifying uncertainty in model parameters by incorporating probabilistic modeling into a conventional modeling approach. The Bayesian belief feature model presents an enhanced feature modeling approach in terms of truth quantification and visual expressiveness. The second solution addresses the unclear support for reasoning under uncertainty and the challenging constraint satisfaction problem in software product lines, through a mathematical reasoner designed to satisfy the model constraints by considering probability weights for all involved parameters and quantifying the actual implications of the problem constraints. The resulting Uncertain Constraint Satisfaction Problem approach has been tested and validated through a set of designated experiments. The main contributions of this thesis are to:
• Develop a framework for probabilistic graphical modeling to build the proposed Bayesian belief feature model.
• Extend the model to enhance visual expressiveness through the integration of colour-degree variation, in which the colour varies with the predefined probabilistic weights.
• Enhance constraint satisfaction by measuring the uncertainty of the parameters' truth assumptions.
• Validate the developed approach in different experimental settings to determine its functionality and performance.
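
A minimal sketch of the core idea, assuming a toy feature tree with hypothetical names and probability weights (an illustration of probabilistic truth quantification over a feature model, not the thesis's actual construction):

```python
# Each feature carries a probabilistic truth weight conditioned on its
# parent feature, so any configuration's plausibility can be quantified.
# Feature names and probabilities below are hypothetical.

# P(feature selected | parent selected); the root "car" is always selected.
cond_prob = {
    "car": 1.0,           # root feature
    "engine": 1.0,        # mandatory child of "car"
    "sunroof": 0.3,       # optional child of "car"
    "solar_panel": 0.6,   # optional child of "sunroof"
}
parents = {"engine": "car", "sunroof": "car", "solar_panel": "sunroof"}

def config_probability(selected):
    """Joint probability of a feature configuration via the chain rule."""
    p = 1.0
    for feature, prob in cond_prob.items():
        parent = parents.get(feature)
        if parent is not None and parent not in selected:
            if feature in selected:
                return 0.0   # violates the tree constraint: child without parent
            continue         # feature is vacuously deselected
        p *= prob if feature in selected else (1.0 - prob)
    return p

print(config_probability({"car", "engine", "sunroof"}))  # 1.0 * 1.0 * 0.3 * 0.4
```

Constraint reasoning under uncertainty then amounts to ranking or pruning configurations by such probabilities rather than by Boolean satisfiability alone.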

Relevance: 100.00%

Abstract:

When something unfamiliar emerges, or when something familiar does something unexpected, people need to make sense of what is going on in order to act. Social representations theory suggests how individuals and society make sense of the unfamiliar, and hence how the resulting social representations (SRs) cognitively, emotionally, and actively orient people and enable communication. SRs are social constructions that emerge through individual and collective engagement with media and with everyday conversation. Recent developments in text analysis, in particular topic modeling, provide a potentially powerful analytical method for examining the structure and content of SRs using large samples of narrative text. This paper describes the methods and results of applying topic modeling to 660 micronarratives collected from Australian academics/researchers, government employees, and members of the public in 2010-2011. The narrative fragments focused on adaptation to climate change (CC), and hence provide an example of Australian society making sense of an emerging and conflict-ridden phenomenon. The topic modeling results reflect elements of SRs of adaptation to CC that are consistent with findings in the literature, and are reasonably robust predictors of classes of action in response to CC. Bayesian network (BN) modeling was used to identify relationships among the topics (SR elements), and in particular among topics, sentiment, and action. Finally, the resulting model and topic modeling results are used to highlight differences in the salience of SR elements among social groups. Linking topic modeling and BN modeling offers a new and encouraging approach for ongoing research on SRs.
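
As a sketch of the topic-modeling step, the following fits latent Dirichlet allocation with scikit-learn on placeholder micronarratives (the paper does not specify this exact implementation; the texts and settings here are illustrative):

```python
# Sketch: extracting topic loadings from short narrative fragments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

narratives = [
    "water restrictions changed how we garden through the drought",
    "council planning ignores sea level rise in coastal zones",
    "our farm shifted sowing dates as the seasons became less reliable",
]

# Bag-of-words representation of the narrative fragments.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(narratives)

# Fit a small topic model; n_components is a hypothetical choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

# Per-narrative topic loadings; in the study's design, loadings like these
# feed a Bayesian network relating topics, sentiment and reported action.
print(doc_topics.round(2))
```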

Relevance: 100.00%

Abstract:

Communicating thoughts, facts and narratives through visual devices such as allegory or symbolism was fundamental to early map making, and this remains the case in contemporary illustration. Drawing was employed then as a way of describing historical narratives (fact and folklore) through the convenience of a drawn symbol or character. The map creators were visionaries, depicting known discoveries and anticipating what existed beyond the agreed boundaries. Now that we have photographic and virtual reality maps at our disposal, how can illustration develop the language of what a map is and can be? How can we break the rules of map design and yet still communicate a sense of place, with the aim of informing, exciting and/or educating the 'traveller'? As illustrators we need to question the purpose of creating a 'map': what do we want to communicate, and is representational image making the only way to present information about a location? Is creating a more personal interpretation a form of cartouche, reminiscent of elements within the Hereford Mappa Mundi and the maps of Blaeu, and can this help or hinder the communicative aspect of the map? Looking at a variety of historical and contemporary illustrated maps and at artists (such as Grayson Perry) who track their journeys, both physical and emotional, through drawing, I aim to show that the illustrated map is not mere decoration but a visual language providing an allegorical response to tangible places and personal feelings.

Relevance: 100.00%

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide tools for univariate applications, while the last two develop multivariate methodologies.

In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.

In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set.

Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged versions of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provide a precise, computationally efficient and easy alternative for measuring integrated covariances from noisy and asynchronous prices. A minimum-variance portfolio application shows the superiority of this disentangled realized estimator across numerous performance metrics.

Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose using the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry, demonstrating significant economic gains in a realistic setting that includes short-selling constraints and transaction costs.
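
For orientation, the log-linear Realized GARCH framework that FloGARCH and the Realized LGARCH build on couples a return equation, a conditional-variance recursion and a measurement equation linking the realized measure x_t to the latent variance h_t. A standard statement (the Hansen-Huang-Shek formulation, not the chapters' exact specifications) is

$$ r_t = \sqrt{h_t}\, z_t, \qquad z_t \sim \text{i.i.d.}(0,1), $$

$$ \log h_t = \omega + \beta \log h_{t-1} + \gamma \log x_{t-1}, $$

$$ \log x_t = \xi + \varphi \log h_t + \tau(z_t) + u_t, \qquad u_t \sim N(0, \sigma_u^2), $$

where the leverage function τ(z_t) = τ₁ z_t + τ₂(z_t² − 1) captures asymmetric responses to return shocks. The FloGARCH class extends this structure to capture long memory, and the refined Realized LGARCH of chapter 2 allows the intercept to vary over time.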

Relevance: 100.00%

Abstract:

Americans are accustomed to a wide range of data collection in their lives: censuses, polls, surveys, user registrations, and disclosure forms. When logging onto the Internet, users' actions are tracked everywhere: clicking, typing, tapping, swiping, searching, and placing orders. All of this data is stored to create data-driven profiles of each user. Social network sites, furthermore, set the voluntary sharing of personal data as the default mode of engagement. But the time and energy people devote to creating this massive amount of data, on paper and online, are taken for granted. Few people would consider the time and energy they spend on data production to be labor, and even those who do acknowledge it tend to treat it as accessory to the activities at hand. In the face of pervasive data collection and rising screen time, why do people keep ignoring their labor for data? How has labor for data become invisible, something disregarded by many users? What does invisible labor for data imply for everyday cultural practices in the United States? Invisible Labor for Data addresses these questions. I argue that three intertwined forces contribute to framing data production as void of labor: data production institutions throughout history, the Internet's technological infrastructure (especially the implementation of algorithms), and the multiplication of virtual spaces. There is a common tendency in frameworks of human-computer interaction to deprive data and bodies of their materiality. My Introduction and Chapter 1 offer theoretical interventions by reinstating embodied materiality and redefining labor for data as an ongoing process. The middle chapters present case studies explaining how labor for data is pushed to the margins of narratives about data production: a nationwide debate in the 1960s over whether the U.S. should build a national databank, contemporary Big Data practices in the data broker and Internet industries, and the people hired to produce data for other people's avatars in virtual games. I conclude with a discussion of how the new development of crowdsourcing projects may usher in a new chapter in exploiting invisible and discounted labor for data.