986 results for File systems
Abstract:
Thesis (Master, Computing) -- Queen's University, 2016-05-29 18:11:34.114
Abstract:
"COO-1469-0034."
Abstract:
"Research was supported by the United States Air Force through the Air Force Office of Scientific Research, Air Research and Development Command."
Abstract:
Mode of access: Internet.
Abstract:
"Rev. 8, 1999."
Abstract:
One of the obstacles to improved security of the Internet is the ad hoc development of technologies with different design goals and different security goals. This paper proposes reconceptualizing the Internet as a secure distributed system, focusing specifically on the application layer. The notion is to redesign specific functionality based on principles discovered in research on distributed systems in the decades since the initial development of the Internet. Because of the problems of retrofitting new technology across millions of clients and servers, any option with prospects of success must support backward compatibility. This paper outlines a possible new architecture for Internet-based mail which would replace existing protocols with a more secure framework. To maintain backward compatibility, an initial implementation could offer a web browser-based front end, but the longer-term approach would be to implement the system using appropriate models of replication. (C) 2005 Elsevier Ltd. All rights reserved.
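The abstract stops at the architectural level, so the following Python sketch is purely hypothetical: a toy primary-backup replicated mail store, illustrating only the general kind of replication model alluded to above. All names here (MailboxReplica, ReplicatedMailbox, deliver) are invented for illustration and do not come from the paper.

```python
# Hypothetical illustration only: a toy primary-backup replicated
# mailbox, sketching one "model of replication" a mail framework
# could adopt. None of this comes from the paper itself.
from dataclasses import dataclass, field


@dataclass
class MailboxReplica:
    messages: list[str] = field(default_factory=list)

    def apply(self, message: str) -> None:
        # Apply a delivery to this replica's local copy.
        self.messages.append(message)


@dataclass
class ReplicatedMailbox:
    primary: MailboxReplica = field(default_factory=MailboxReplica)
    backups: list[MailboxReplica] = field(default_factory=list)

    def deliver(self, message: str) -> None:
        # Write-through replication: the delivery is acknowledged only
        # after every backup has applied it, so any replica can serve
        # reads if the primary becomes unavailable.
        self.primary.apply(message)
        for replica in self.backups:
            replica.apply(message)


box = ReplicatedMailbox(backups=[MailboxReplica(), MailboxReplica()])
box.deliver("hello")
assert all(r.messages == ["hello"] for r in [box.primary, *box.backups])
```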
Abstract:
The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define a model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme, the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context remains transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
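As a concrete illustration of the Data/PDE-class design described above, here is a minimal escript script in the spirit of the introductory Poisson example from the escript user guide. It assumes the released esys.escript and esys.finley Python packages; the exact module and class names (Poisson, Rectangle, whereZero) are those of the released library and may differ from the prototype stage described in the abstract.

```python
# Minimal escript sketch: solve the Poisson equation -laplace(u) = 1
# on the unit square, with u = 0 on the left and bottom edges.
from esys.escript import whereZero
from esys.escript.linearPDEs import Poisson
from esys.finley import Rectangle

# Finley (the numerical algorithm layer) provides the finite element
# mesh: a unit square discretized with 40 x 20 elements.
mydomain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=20)

# A Data-class object holding the coordinates; its internal
# representation (node-based, element-based, ...) is transparent
# to the user.
x = mydomain.getX()

# Characteristic function of the Dirichlet boundary:
# 1 where x0 = 0 or x1 = 0, 0 elsewhere.
gammaD = whereZero(x[0]) + whereZero(x[1])

# A PDE-class object: right-hand side f = 1, constraint u = 0 on gammaD.
mypde = Poisson(domain=mydomain)
mypde.setValue(f=1, q=gammaD)

# The generic solver in the numerical algorithm layer does the work.
u = mypde.getSolution()
```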
Abstract:
In his study - File Control: The Heart Of Business Computer Management - William G. O'Brien, Assistant Professor, The School of Hospitality Management at Florida International University, initially informs you: “Even though computers are an everyday part of the hospitality industry, many managers lack the knowledge and experience to control and protect the files in these systems. The author offers guidelines which can minimize or prevent damage to the business as a whole.” Our author opens this study with anecdotal instances illustrating the failure of hospitality managers to exercise due caution with regard to computer-supported information systems inside their restaurants and hotels. “Of the three components that make up any business computer system (data files, programs, and hardware), it is files that are most important, perhaps irreplaceable, to the business,” O’Brien informs you. O’Brien divides files into two distinct categories: files of extrinsic value and their counterpart, files of intrinsic value. An example of an extrinsic-value file would be a restaurant’s wine inventory. “As sales are made and new shipments are received, the computer updates the file,” says O’Brien. “This information might come directly from a point-of-sale terminal or might be entered manually by an employee,” he further explains. On the intrinsic side of the equation, O’Brien wants you to know that the information itself is the valuable part of this type of file. Its value extends over and above the file’s informational purpose as a pragmatic business tool, as in inventory control. “The information is money in the legal sense. For instance, figures moved about in banking system computers do not represent dollars; they are dollars,” O’Brien explains. “If the record of a dollar amount is erased from all computer files, then that money ceases to exist,” he warns. This type of information can also be bought and sold, as customer lists are to advertisers. Files must be protected, O’Brien stresses. “File security requires a systematic approach,” he discloses. O’Brien goes on to explain important elements to consider when evaluating file information. File back-up is also an important factor to think about, along with file storage and safety concerns. “Sooner or later, every property will have its fire, flood, careless mistake, or disgruntled employee,” O’Brien closes. “…good file control can minimize or prevent damage to the business as a whole.”
Characterization of the defined MDC types and compilation of MDC initiation times (Excel file, 19 kB)
Abstract:
Mud accumulates on continental shelves under a variety of environmental conditions, resulting in the formation of diverse mud depocenters (MDCs). Their three-dimensional architectures have been the focus of several recent studies. Due to some terminological confusion concerning MDCs, the present study sets out to define eight individual MDC types in terms of surface sediment distribution and internal geometry. Under conditions of substantial sediment supply, prodeltas (distal zones off river deltas; triangular sheets), subaqueous deltas (disconnected from deltas by strong normal-to-shore currents; wedge-like clinoforms), mud patches (scattered distribution), and mud blankets (widespread covers) are formed. Forced by hydrodynamic conditions, mud belts in the strict sense (detached from source; elongated bodies) and shallow-water contourite drifts (detached from source; growing normal to the prevailing current direction; triangular clinoforms) develop. Controlled by local morphology, mud entrapments (in depressions, behind morphological steps) and mud wedges (triangular clinoforms growing in flow direction) are deposited. Shelf mud deposition took place (1) during early outer-shelf drowning (~14 ka), (2) after inner-shelf inundation to maximum flooding (9.5-6.5 ka), and (3) in sub-recent times (<2 ka). Subsequent expansion may be (1) concentric, in cases where the depocenter formed near the fluvial source, (2) uni-directional, extending along advective current transport paths, or (3) progradational, forming clinoforms that grow either parallel or normal to the bottom current direction. Classical mud belts may be initiated around defined nuclei, the remote sites of which are determined by seafloor morphology rather than by the location of the source. From a stratigraphic perspective, mud depocenters coincide with sea-level highstand-related, shelf-wide condensed sections. They often show a conformable succession from transgressive to highstand systems tract stages.
Abstract:
Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance, and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two recent challenges have emerged as modern software systems start to treat logging as a core aspect of their software. In particular, 1) infrastructural challenges have emerged due to the plethora of logging libraries available today, and 2) processing challenges have emerged due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve (commonly this is done by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects. We find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For processing challenges, we explore the different factors which can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, Cloudstack, and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once. We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density, and SLOC are important factors in determining the stability of logging statements.
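As a concrete (but synthetic) illustration of the classifier side of this setup, here is a minimal Python sketch using scikit-learn's RandomForestClassifier over the four factors the abstract names as important. The feature values and labels below are random stand-ins, not the thesis's data, and the Cox survival models the thesis also uses are not shown.

```python
# Illustrative sketch only (not the thesis's actual pipeline): a random
# forest predicting whether a logging statement will later change,
# trained on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-logging-statement metrics, one row per statement:
# file ownership share, developer experience (commits),
# log density (logging LOC / LOC), and SLOC of the enclosing file.
X = np.column_stack([
    rng.uniform(0, 1, n),        # file ownership
    rng.integers(1, 500, n),     # developer experience
    rng.uniform(0, 0.2, n),      # log density
    rng.integers(10, 5000, n),   # SLOC
])
# Label: 1 if the logging statement was later changed, 0 otherwise.
y = rng.integers(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Feature importances show which factor drives the predictions.
names = ["file ownership", "developer experience", "log density", "SLOC"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
print("held-out accuracy:", clf.score(X_test, y_test))
```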
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
Part 3: Product-Service Systems
Abstract:
Part 21: Mobility and Logistics
Abstract:
Part 11: Reference and Conceptual Models
Abstract:
Part 8: Business Strategies Alignment