813 results for Computer and Video Games
Abstract:
Various digital watermarking (WM) techniques for still images have been studied over the last several years. Recently, many new WM schemes have been proposed for other types of digital multimedia data, such as text, audio, and video. This paper presents a brief overview of existing digital video WM techniques. We classify WM techniques and discuss the properties of video WM. Since each WM application has its own specific requirements, WM design must take the intended application into consideration. Video WM applications are also discussed in the paper. The features of software and hardware video WM implementations, and the differences between them, are presented through descriptions of four examples of existing work.
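As a concrete illustration of the kind of spatial-domain scheme such a survey would classify, the sketch below embeds a watermark in the least significant bits (LSB) of a single frame. This is a hedged toy example with assumed inputs, not a scheme from the paper.

```python
import numpy as np

def embed_lsb(frame: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """Embed a bit string into the least significant bits of a grayscale frame.

    frame: 2-D uint8 array (one video frame); watermark_bits: 1-D array of 0/1.
    """
    flat = frame.flatten()  # copy, so the original frame is untouched
    n = len(watermark_bits)
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits  # overwrite LSB of first n pixels
    return flat.reshape(frame.shape)

def extract_lsb(frame: np.ndarray, n: int) -> np.ndarray:
    """Recover the first n embedded bits."""
    return frame.flatten()[:n] & 1

# Example: embed 16 bits into a synthetic 8x8 frame and read them back.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = rng.integers(0, 2, size=16, dtype=np.uint8)
marked = embed_lsb(frame, bits)
assert np.array_equal(extract_lsb(marked, 16), bits)
```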
Abstract:
Scientific studies advance day by day, and computer programs increasingly facilitate human life. Scientists have examined the neural structure of the human brain and tried to model it in the computer, giving the result the name artificial neural network (ANN); with it, they aim to solve more complex problems. The purpose of this study is to estimate the fuel economy of an automobile engine by using an ANN algorithm. Engine characteristics were simulated using the “Neuro Solution” software. The same data were used in MATLAB to compare MATLAB’s performance on such a problem and show its validity. Cylinder count, displacement, power, weight, acceleration, and vehicle production year are used as input data, and miles per gallon (MPG) is used as the target data. An ANN model was developed in which 70% of the data were used for training, 15% for testing, and 15% for validation. In creating the model, the number of neurons was carefully selected to increase the speed of the network. Since the problem has a nonlinear structure, multiple layers are used in the model.
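As a hedged sketch of a comparable setup (the study itself used the “Neuro Solution” software and MATLAB), the following scikit-learn code reproduces the abstract's 70/15/15 split and feature list; the synthetic data stands in for the real engine dataset.

```python
# Hedged sketch of a comparable MPG-prediction pipeline; random data stands in
# for the real engine dataset described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Columns: cylinders, displacement, power, weight, acceleration, model year.
X = rng.random((400, 6))
y = rng.random(400)  # stand-in for observed MPG values

# 70% train / 15% test / 15% validation, as in the abstract.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

# A small multilayer network, reflecting the nonlinear structure of the problem.
model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```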
Abstract:
Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high-performance communication networks are central to contemporary information technology. Computing applications in a wide range of areas, including business, communications, medical research, transportation, entertainment, and education, are transforming societies around the globe. The rapid changes in the fields of computing and information technology also make the study of ethics exciting and challenging, as nearly every day the media report on a new invention, controversy, or court ruling. This tutorial will provide a broad overview of the scientific foundations, technological advances, social implications, and ethical and legal issues related to computing. It will cover the milestones in computing and in networking, the social context of computing, professional and ethical responsibilities, philosophical frameworks, and the social, ethical, historical, and political implications of computer and information technology. It will outline the impact of the tremendous growth of computer and information technology on people, ethics, and law. Political and legal implications become clear when we analyze how technology has outpaced the legal and political arenas.
Abstract:
Presenting qualitative and quantitative findings on the lived experiences of around seven hundred young adults from Christian, Muslim, Jewish, Hindu, Buddhist, Sikh and mixed-faith backgrounds, Religious and Sexual Identities provides an illuminating and nuanced analysis of young adults' perceptions and negotiations of their religious, sexual, youth and gender identities. It demonstrates how these young adults creatively construct meanings and social connections as they navigate demanding but exciting spaces in which their multiple identities intersect. Accessible quantitative analyses are combined with rich interview and video-diary narratives in this theoretically informed exploration of religious and sexual identities in contemporary society. © Andrew Kam-Tuck Yip and Sarah-Jane Page 2013. All rights reserved.
Abstract:
'Takes the challenging and makes it understandable. The book contains useful advice on the application of statistics to a variety of contexts and shows how statistics can be used by managers in their work.' - Dr Terri Byers, Assistant Professor, University of New Brunswick, Canada

A book about introductory quantitative analysis for business students, designed to be read by first- and second-year students on a business studies degree course and assuming little or no background in mathematics or statistics. Drawing on extensive knowledge and experience of how people learn, and in particular how people learn mathematics, the authors show both how and why quantitative analysis is useful in the context of business and management studies, encouraging readers not only to memorise the content but to apply their learning to typical problems. Fully up to date, with comprehensive coverage of IBM SPSS and Microsoft Excel software, the tailored examples illustrate how the programs can be used, and include step-by-step figures and tables throughout. A range of 'real world' and fictional examples, including "The Ballad of Eddie the Easily Distracted" and "Esha's Story", help bring the study of statistics alive. A number of in-text boxouts can be found throughout the book, aimed at readers at varying levels of study and understanding:
• Back to Basics: for those struggling to understand, explaining concepts in the most basic way possible, often relating to interesting or humorous examples
• Above and Beyond: for those racing ahead who want to be introduced to more interesting or advanced concepts that are a little outside of what they may need to know
• Think It Over: gets students to stop, engage, and reflect upon the different connections between topics
A range of online resources, including a set of data files and templates for the reader following in-text examples, downloadable worksheets and instructor materials, answers to in-text exercises, and video content, complement the book.
Abstract:
The convergence of data, audio, and video on IP networks is changing the way individuals, groups, and organizations communicate. This diversity of communication media presents opportunities for creating synergistic collaborative communications. This form of collaborative communication is, however, not without its challenges. The increasing number of communication service providers, coupled with a combinatorial mix of offered services, varying Quality-of-Service, and oscillating pricing, increases the complexity for the user of managing and maintaining 'always best' priced or performing services. Consumers have to manually manage and adapt their communication in line with differences in services across devices, networks, and media while ensuring that usage remains consistent with their intended goals. This dissertation proposes a novel user-centric approach to address this problem. The proposed approach aims to reduce the aforementioned complexity for the user by (1) providing high-level abstractions and a policy-based methodology for automated selection of communication services guided by high-level user policies and (2) seamlessly integrating multiple communication service providers and providing an extensible framework to support such integration. The approach was implemented in the Communication Virtual Machine (CVM), a model-driven technology for realizing communication applications. The CVM includes the Network Communication Broker (NCB), the layer responsible for providing a network-independent API to the upper layers of the CVM. The initial prototype of the NCB supported only a single communication framework, which limited the number, quality, and types of services available. Experimental evaluation shows that the additional overhead of the approach is minimal compared to the individual communication service frameworks. Additionally, the proposed automated approach outperformed the individual communication service frameworks for cross-framework switching.
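The following is a minimal sketch of what policy-guided provider selection could look like; the Provider fields, the policy weights, and the candidate names are illustrative assumptions, not the CVM/NCB API itself.

```python
# Hedged sketch of policy-guided service selection, in the spirit of the
# user-centric approach described above; all names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_minute: float   # lower is better
    quality_score: float      # 0..1, higher is better (e.g., measured QoS)

def select_provider(candidates, price_weight=0.5, quality_weight=0.5):
    """Pick the provider best satisfying a high-level 'best priced/performing' policy."""
    max_price = max(p.price_per_minute for p in candidates)
    def utility(p):
        # Normalize price so that cheaper providers score higher.
        return price_weight * (1 - p.price_per_minute / max_price) \
             + quality_weight * p.quality_score
    return max(candidates, key=utility)

candidates = [
    Provider("ProviderA", 0.02, 0.90),
    Provider("ProviderB", 0.01, 0.70),
]
# A price-sensitive user policy picks ProviderB; a quality-sensitive one, ProviderA.
print(select_provider(candidates, price_weight=0.8, quality_weight=0.2).name)
```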
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a sixfold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates while working on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the fast innovations in video and imaging. This work contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting occlusion occurrence during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory-storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
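Below is a minimal software sketch of running-average background subtraction with a global, data-driven mean threshold, in the spirit of the RAMT approach named above; the dissertation's exact formulation and its FPGA implementation are not reproduced here.

```python
# Hedged sketch: exponential running average background plus a global mean
# threshold on the frame difference. The threshold rule is an assumption.
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average over past frames."""
    return (1 - alpha) * background + alpha * frame

def detect_foreground(background, frame):
    """Threshold the frame difference with a single global threshold:
    the mean absolute difference over the whole frame, so the threshold
    adapts automatically to indoor/outdoor scenes."""
    diff = np.abs(frame.astype(np.float64) - background)
    threshold = diff.mean()
    return diff > threshold  # boolean foreground mask

# Example on synthetic frames with a bright moving target.
rng = np.random.default_rng(1)
background = rng.random((120, 160)) * 255
frame = background.copy()
frame[40:60, 70:90] += 80
mask = detect_foreground(background, frame)
print("foreground pixels:", int(mask.sum()))
background = update_background(background, frame)
```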
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand the VMs’ resource demands for meeting their Quality of Service (QoS) targets. Second, the dynamics in the applications and the system also make it difficult to maintain the desired QoS target while the environment changes. Third, the transparency of virtualization presents a hurdle for the guest-layer application and the host-layer VM scheduler to cooperate on improving application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy modeling and control-theory-based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM’s complex demands for multiple types of resources, automatically and online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive-control-based controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach, which can quickly track applications’ QoS targets and optimize resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM’s host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when the resources are contended by dynamic workloads.
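As a hedged sketch of the predictive-control idea, with a naive linear trend model standing in for the dissertation's online fuzzy model, a per-VM CPU allocation loop might look like this; the numbers and the headroom policy are illustrative assumptions.

```python
# Hedged sketch of a model-predictive-style allocation loop for one VM's CPU cap.
def predict_demand(history, horizon=3):
    """Naive linear-trend predictor over the last two observations (the
    dissertation replaces such a model with an online fuzzy model)."""
    trend = history[-1] - history[-2]
    return [history[-1] + trend * (k + 1) for k in range(horizon)]

def plan_allocation(history, qos_headroom=0.1, cap=1.0):
    """Allocate enough CPU to cover the worst predicted demand plus headroom,
    clamped to the physical capacity of one core."""
    worst = max(predict_demand(history))
    return min(cap, max(0.0, worst * (1 + qos_headroom)))

demand_history = [0.30, 0.35, 0.42, 0.50]  # observed CPU utilization of the VM
print("next CPU cap:", round(plan_allocation(demand_history), 3))
```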
Abstract:
A nuclear waste stream is the complete flow of waste material from origin to treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information Systems (GIS) module using the Google Maps Application Programming Interface (API) for better visualization of nuclear waste streams, one that identifies and displays various waste stream parameters. A proper display of parameters would enable managers at Department of Energy waste sites to visualize the information needed for proper planning of waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used for the implementation of the project. The study has shown that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore further Google Maps API functionality to enhance the visualization of nuclear waste streams.
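For illustration, a quadratic Bézier curve B(t) = (1-t)²P0 + 2(1-t)t·P1 + t²P2, for t in [0, 1], bends a straight origin-to-destination line toward a control point P1, which is what makes overlapping waste-stream routes easier to tell apart on a map. The sketch below evaluates such a curve; the coordinates and the offset-midpoint control point are assumptions for illustration.

```python
# Hedged sketch of quadratic Bezier interpolation between two map points.
def quadratic_bezier(p0, p1, p2, steps=32):
    """Return points along the curve from p0 to p2, bent toward control point p1."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

origin, destination = (25.76, -80.19), (36.17, -115.14)   # illustrative lat/lng
control = ((origin[0] + destination[0]) / 2 + 5,          # bend the path northward
           (origin[1] + destination[1]) / 2)
path = quadratic_bezier(origin, control, destination)
print(len(path), "points, first:", path[0])
```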
Abstract:
Present theories of deep-sea community organization recognize the importance of small-scale biological disturbances, originating partly from the activities of epibenthic megafaunal organisms, in maintaining high benthic biodiversity in the deep sea. However, due to technical difficulties, in situ experimental studies to test hypotheses in the deep sea are lacking. The objective of the present study was to evaluate the potential of cages as tools for studying the importance of epibenthic megafauna for deep-sea benthic communities. Using the deep-diving Remotely Operated Vehicle (ROV) "VICTOR 6000", six experimental cages were deployed on the sea floor at 2500 m water depth and sampled after 2 years (2y) and 4 years (4y) for a variety of sediment parameters in order to test for caging artefacts. Photo and video footage from both experiments showed that the cages were efficient at excluding the targeted fauna. The cages also proved appropriate for deep-sea studies, considering that there was no fouling on them and no evidence of any organism establishing residence on or adjacent to them. Environmental changes inside the cages depended on the experimental period analysed. In the 4y experiment, chlorophyll a concentrations were higher in the uppermost centimeter of sediment inside the cages, whereas in the 2y experiment they did not differ between inside and outside. Although the cages caused some changes to the sedimentary regime, these were relatively minor compared to similar studies in shallow water. The only parameter that was significantly higher under the cages in both experiments was the concentration of phaeopigments. Since the epibenthic megafauna at our study site can potentially affect phytodetritus distribution and availability at the seafloor (e.g. via consumption, disaggregation and burial), we suggest that their exclusion was, at least in part, responsible for the increases in pigment concentrations. Cages may thus be suitable tools for studying the long-term effects of disturbances caused by megafaunal organisms on the diversity and community structure of smaller-sized organisms in the deep sea, although further work employing partial cage controls, greater replication, and evaluation of faunal components will be essential to unequivocally establish their utility.
Abstract:
Due to the huge popularity of portable terminals based on wireless LANs and the increasing demand for multimedia services on these terminals, earlier structures and protocols are insufficient to cover the requirements of emerging networks and communications. Most research in this field is tailored to finding more efficient ways to optimize the quality of wireless LANs with regard to the requirements of multimedia services. Our work investigates the effects of modulation modes at the physical layer, retry limits at the MAC layer, and packet sizes at the application layer on the quality of media packet transmission. The interrelation among these parameters, from which a cross-layer idea can be extracted, is discussed as well. We show how these parameters from different layers jointly contribute to the performance of service delivery by the network. The results obtained could form a basis for suggesting independent optimization in each layer (an adaptive approach) or optimization of a set of parameters from different layers (a cross-layer approach). Our simulation model is implemented in the NS-2 simulator. Throughput and delay (latency) of packet transmission are the quantities assessed. © 2010 IEEE.
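A back-of-the-envelope sketch of the cross-layer interplay described above: assuming independent bit errors, a larger application-layer packet lowers the per-attempt success probability, while a higher MAC retry limit raises the overall delivery probability. The bit-error rate and sizes below are illustrative assumptions, not results from the paper's NS-2 model.

```python
# Hedged sketch: how PHY error rate, MAC retry limit, and application packet
# size jointly shape delivery probability under an independent-bit-error model.
def frame_success_prob(ber, payload_bytes, overhead_bytes=40):
    """Probability that one transmission attempt arrives intact."""
    bits = 8 * (payload_bytes + overhead_bytes)
    return (1 - ber) ** bits

def delivery_prob(ber, payload_bytes, retry_limit):
    """Probability of success within 1 + retry_limit attempts."""
    p = frame_success_prob(ber, payload_bytes)
    return 1 - (1 - p) ** (1 + retry_limit)

for payload in (256, 512, 1024):
    for retries in (0, 3, 7):
        print(payload, retries, round(delivery_prob(1e-4, payload, retries), 4))
```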
Abstract:
The enterprise management approach provides a holistic view of organizations and their related information systems. In order to cope with globalization, virtualization, and a volatile competitive environment, traditional firms are seeking to reconstruct their organizational structures and establish new IS architectures, transforming from single autonomous entities into more open enterprises supported by new Enterprise Resource Planning (ERP) systems. This paper reports on ERP engage-abilities within three different enterprise management patterns, based on the theoretical foundations of the "Dynamic Enterprise Reference Grid". An exploratory inductive study of Zoomlion using the narrative research approach was conducted. This research also delivers a conceptual framework to demonstrate the adoption of ERP within the three enterprise management structures and points to a new architectural type (ERPIII) for operating in the virtual enterprise paradigm. © 2010 Springer-Verlag.
Abstract:
Social attitudes, attitudes toward financial risk, and attitudes toward deferred gratification are thought to influence many important economic decisions over the life course. In economic theory, these attitudes are key components in diverse models of behavior, including collective action, saving and investment decisions, and occupational choice. The relevance of these attitudes has been confirmed empirically. Yet the factors that influence them are not well understood. This research evaluates how these attitudes are affected by large disruptive events, namely a natural disaster and a civil conflict, and also by an individual-specific life event, namely having children.
By implementing rigorous empirical strategies drawing on rich longitudinal datasets, this research project advances our understanding of how life experiences shape these attitudes. Moreover, compelling evidence is provided that the observed changes in attitudes are likely to reflect changes in preferences, given that they are not driven merely by changes in financial circumstances. Therefore, the findings of this research project also contribute to the discussion of whether preferences are really fixed, a common assumption in economics.
In the first chapter, I study how altruistic and trusting attitudes are affected by exposure to the 2004 Indian Ocean tsunami as long as ten years after the disaster occurred. Establishing a causal relationship between natural disasters and attitudes presents several challenges, as endogenous exposure and sample selection can confound the analysis. I take on these challenges by exploiting plausibly exogenous variation in exposure to the tsunami and by relying on a longitudinal dataset representative of the pre-tsunami population in two districts of Aceh, Indonesia. The sample is drawn from the Study of the Tsunami Aftermath and Recovery (STAR), a survey with data collected both before and after the disaster and especially designed to identify the impact of the tsunami. The altruistic and trusting attitudes of the respondents are measured by their behavior in the dictator and trust games. I find that closely witnessing the damage caused by the tsunami without suffering severe economic damage oneself increases altruistic and trusting behavior, particularly towards individuals from tsunami-affected communities. Having suffered severe economic damage has no impact on altruistic behavior but may have increased trusting behavior. These effects do not seem to be caused by the consequences of the tsunami for people’s financial situation. Instead, they are consistent with how experiences of loss and solidarity may have shaped social attitudes by affecting empathy and perceptions of who is deserving of aid and trust.
In the second chapter, co-authored with Ryan Brown, Duncan Thomas, and Andrea Velasquez, we investigate how attitudes toward financial risk are affected by the elevated levels of insecurity and uncertainty brought on by the Mexican Drug War. To conduct our analysis, we pair the Mexican Family Life Survey (MxFLS), a rich longitudinal dataset ideally suited for our purposes, with a dataset of homicide rates at the month and municipality level. The homicide rates capture well the overall crime environment created by the drug war. The MxFLS elicits risk attitudes by asking respondents to choose between hypothetical gambles with different payoffs. Our strategy to identify a causal effect has two key components. First, we implement an individual fixed-effects strategy, which allows us to control for all time-invariant heterogeneity; the remaining time-variant heterogeneity is unlikely to be correlated with changes in the local crime environment given the well-documented political origins of the Mexican Drug War, and we show supporting evidence in this regard. Second, we use an intent-to-treat approach to shield our estimates from endogenous migration. Our findings indicate that exposure to greater local-area violent crime results in increased risk aversion. This effect is not driven by changes in financial circumstances but may be explained instead by a heightened fear of victimization. Nonetheless, we find that having greater economic resources mitigates the impact. This may be because individuals with greater economic resources can avoid crime by affording better transportation or security at work.
The third chapter, co-authored with Duncan Thomas, evaluates whether attitudes toward deferred gratification change after having children. For this study we again exploit the MxFLS, which elicits attitudes toward deferred gratification (commonly known as time discounting) by asking individuals to choose between hypothetical payments at different points in time. We implement a difference-in-differences estimator to control for all time-invariant heterogeneity and show that our results are robust to the inclusion of time-varying characteristics likely correlated with childbirth. We find that becoming a mother increases time discounting, especially in the first two years after childbirth and in particular for women without a spouse at home. Having additional children does not have an effect, and the effect for men seems to go in the opposite direction. These heterogeneous effects suggest that child rearing may affect time discounting through the stress it generates or through spending needs that were not fully anticipated.
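For readers unfamiliar with the estimator, a difference-in-differences specification with individual fixed effects for this setting might be written as follows; the variable names are illustrative assumptions, not the chapter's actual regression.

```latex
% Hedged sketch of a difference-in-differences specification with individual
% fixed effects; variable names are illustrative, not the chapter's own.
\begin{equation*}
  y_{it} = \alpha_i + \gamma_t + \beta\,\mathrm{Mother}_{it} + X_{it}'\delta + \varepsilon_{it}
\end{equation*}
% y_{it}: measured time discounting of individual i in survey wave t;
% \alpha_i: individual fixed effect absorbing time-invariant heterogeneity;
% \gamma_t: survey-wave effect; Mother_{it}: 1 after i's first childbirth;
% X_{it}: time-varying controls; \beta: the effect of interest.
```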
Abstract:
This dissertation consists of three distinct components: (1) “Double Rainbow,” a notated composition for an acoustic ensemble of 10 instruments, ca. 36 minutes; (2) “Appalachiana,” a fixed-media composition for electro-acoustic music and video, ca. 30 minutes; and (3) “‘The Invisible Mass’: Exploring Compositional Technique in Alfred Schnittke’s Second Symphony,” an analytical article.
(1) Double Rainbow is a ca. 36-minute composition in four movements scored for 10 instruments: flute, Bb clarinet (doubling on bass clarinet), tenor saxophone (doubling on alto saxophone), French horn, percussion (glockenspiel, vibraphone, wood block, 3 toms, snare drum, bass drum, suspended cymbal), piano, violin, viola, cello, and double bass. Each of the four movements explores its own distinct character and set of compositional goals. The piece is presented as a musical score and as a recording, which was extensively treated in post-production.
(2) Appalachiana is a ca. 30-minute fixed-media composition for music and video. The musical component was created as a vehicle to showcase several approaches to electro-acoustic music composition: FFT re-synthesis for time-manipulation effects; the use of a custom-built software instrument that implements generative approaches to creating rhythm and pitch patterns; using a recording of rain to create rhythmic triggers for software instruments; and recording additional components with acoustic instruments. The video component transforms footage of natural landscapes filmed at several locations in North Carolina, Virginia, and West Virginia into a surreal narrative using a variety of color, lighting, distortion, and time-manipulation video effects.
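As an illustration of the FFT re-synthesis technique mentioned above (and not the composer's actual toolchain), a phase-vocoder time stretch takes only a few lines with librosa; the file names are placeholders.

```python
# Hedged illustration of FFT-based time stretching (phase vocoding): the audio
# plays at half speed without changing pitch. File names are placeholders.
import librosa
import soundfile as sf

y, sr = librosa.load("field_recording.wav", sr=None)   # placeholder input file
stretched = librosa.effects.time_stretch(y, rate=0.5)  # half speed, same pitch
sf.write("field_recording_stretched.wav", stretched, sr)
```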
(3) “‘The Invisible Mass’: Exploring Compositional Technique in Alfred Schnittke’s Second Symphony” is an analytical article that focuses on Alfred Schnittke’s compositional technique as evidenced in the construction of his Second Symphony and as discussed by the composer in a number of previously untranslated articles and interviews. Though this symphony is pivotal in the composer’s oeuvre, there are currently no scholarly articles that offer in-depth analyses of the piece. The article combines analyses of the harmony, form, and orchestration of the Second Symphony with relevant quotations from the composer, some from published and translated sources and others newly translated by the author from research at the Russian State Library in St. Petersburg. These offer a perspective on how Schnittke’s compositional technique combines systematic geometric design with keen musical intuition.
Abstract:
This paper examines the remarkable and unexplored correspondence between games (and board games in particular) and what is commonly understood as theory in the social sciences. It argues that games exhibit many if not most of the attributes of theory, but that theory is missing some of the features of games. As such, games provide a way of rethinking what we mean by theory and theorizing. Specifically, games and their relationship with the ‘real’ world provide a way of thinking about theory and theorizing that is consistent with recent calls to frame social inquiry around the concept of phrónēsis.