910 results for Computer network protocols
Abstract:
Most of the key management protocols proposed for wireless sensor networks (WSNs) in the literature assume that a single base station is used and that the base station is trustworthy. However, there are applications in which multiple base stations are used and the security of the base stations must be considered. This paper investigates a key management protocol for wireless sensor networks that include multiple base stations. We consider situations in which both the base stations and the sensor nodes can be compromised. The proposed key management protocol, mKeying, includes two schemes: a key distribution scheme, mKeyDist, supporting multiple base stations in the network, and a key revocation scheme, mKeyRev, used to efficiently remove compromised nodes from the network. Our analyses show that the proposed protocol is efficient and secure against the compromise of both base stations and sensor nodes.
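The abstract gives only the high-level structure of mKeying, so the following is a generic, hypothetical Python sketch of the requirement it highlights, namely that key revocation should tolerate compromised base stations; the threshold rule, class and method names are assumptions for illustration, not the paper's mKeyDist/mKeyRev design.

# Purely illustrative sketch (not the paper's scheme): a sensor node keeps
# pairwise keys with its neighbours and revokes a suspected node only after
# enough distinct base stations have voted for its removal, so that a single
# compromised base station cannot evict legitimate nodes on its own.

REVOCATION_THRESHOLD = 2   # assumed number of base stations that must agree

class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.pairwise_keys = {}     # neighbour id -> key shared with that neighbour
        self.revocation_votes = {}  # suspect id -> set of base stations that voted

    def on_revocation_vote(self, suspect_id, base_station_id):
        """Record one base station's (authenticated) vote to revoke a node."""
        votes = self.revocation_votes.setdefault(suspect_id, set())
        votes.add(base_station_id)
        if len(votes) >= REVOCATION_THRESHOLD:
            self.revoke(suspect_id)

    def revoke(self, suspect_id):
        """Forget every key shared with the compromised node."""
        self.pairwise_keys.pop(suspect_id, None)
        self.revocation_votes.pop(suspect_id, None)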
Abstract:
Translucent WDM optical networks use sparse placement of regenerators to overcome the impairments and wavelength contention introduced by fully transparent networks, and achieve performance close to that of fully opaque networks at much less cost. Our previous study proved the feasibility of translucent networks using the sparse regeneration technique. We addressed the placement of regenerators based on static schemes allowing only a fixed number of regenerators at fixed locations. This paper furthers the study by proposing a suite of dynamic routing schemes. Dynamic allocation, advertisement and discovery of regeneration resources are proposed to support the sharing of transmitters and receivers between regeneration and access functions. This study follows the current trend in the optical networking industry by utilizing extensions of IP control protocols. Dynamic routing algorithms, aware of current regeneration resources and link states, are designed to smartly route connection requests under quality constraints. A hierarchical network model, supported by an MPLS-based control plane, is also proposed to provide scalability. Experiments show that network performance is improved without the placement of extra regenerators.
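As a rough illustration of regeneration-aware routing under a quality constraint, the sketch below walks a candidate route, accumulates a single scalar impairment per link (an assumed simplification of the physical-layer constraints), and claims a shared regenerator wherever the next hop would push the signal past the threshold; all names are illustrative.

def place_regenerators(path, link_impairment, free_regens, max_impairment):
    """path: ordered list of nodes; link_impairment[i] is the impairment of the
    link from path[i] to path[i+1]; free_regens: nodes currently advertising a
    free regenerator. Returns the regeneration nodes, or None if infeasible."""
    used, accumulated = [], 0.0
    for i, imp in enumerate(link_impairment):
        if imp > max_impairment:
            return None                     # a single hop already violates the constraint
        if accumulated + imp > max_impairment:
            node = path[i]                  # regenerate before traversing this link
            if node not in free_regens:
                return None
            used.append(node)
            accumulated = 0.0
        accumulated += imp
    return used

A dynamic routing algorithm of the kind described above would apply such a check to each candidate route, filling free_regens from the dynamically advertised regeneration resources.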
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been largely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to investigate the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees, or more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called node-depth-degree representation (NDDR), and we demonstrate that using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds small constants and lower order terms to the theoretical bound.
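The abstract does not spell out the NDDR operations themselves; as a rough illustration of the family of encodings it belongs to, the sketch below uses the simpler node-depth idea, storing a tree as a preorder array of (node, depth) pairs so that a subtree is a contiguous slice that can be pruned and grafted elsewhere. Function names and the example tree are assumptions for illustration.

# Node-depth style encoding (illustrative; the paper's NDDR adds degree
# information and achieves better average-case bounds).

def subtree_slice(tree, root_index):
    """Return (start, end) delimiting the subtree rooted at root_index."""
    root_depth = tree[root_index][1]
    end = root_index + 1
    while end < len(tree) and tree[end][1] > root_depth:
        end += 1
    return root_index, end

def prune_and_graft(tree, root_index, new_parent_index):
    """Move the subtree at root_index below the node at new_parent_index
    (the new parent must lie outside the pruned subtree)."""
    start, end = subtree_slice(tree, root_index)
    subtree = tree[start:end]
    depth_shift = tree[new_parent_index][1] + 1 - subtree[0][1]
    shifted = [(node, depth + depth_shift) for node, depth in subtree]
    remaining = tree[:start] + tree[end:]
    insert_at = next(i for i, (node, _) in enumerate(remaining)
                     if node == tree[new_parent_index][0]) + 1
    return remaining[:insert_at] + shifted + remaining[insert_at:]

# Example: path 0-1-2 plus leaf 3 under 0; move node 2 under node 3.
tree = [(0, 0), (1, 1), (2, 2), (3, 1)]
print(prune_and_graft(tree, root_index=2, new_parent_index=3))
# -> [(0, 0), (1, 1), (3, 1), (2, 2)]

These plain list copies cost O(n) per operation; the point of the NDDR is precisely to organize the representation so that producing a new spanning forest costs only O(√n) on average.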
Abstract:
There is an urgent need to expand the number of brain banks serving psychiatric research. We describe here the Psychiatric Disorders arm of the Brain Bank of the Brazilian Aging Brain Study Group (Psy-BBBABSG), which is focused on bipolar disorder (BD) and obsessive compulsive disorder (OCD). Our protocol was designed to minimize limitations faced by previous initiatives and to enable design-based neurostereological analyses. The Psy-BBBABSG's first milestone is the collection of 10 brains each from BD and OCD patients, and matched controls. The brains are sourced from a population-based autopsy service. The clinical and psychiatric assessments were done by an expert team including psychiatrists, through an informant. One hemisphere was perfusion-fixed to provide optimal fixation for neurostereological studies. The other hemisphere was comprehensively dissected and frozen for molecular studies. In 20 months, we collected 36 brains. A final report was completed for 14 cases: 3 BD, 4 major depressive disorder, 1 substance use disorder, 1 mood disorder NOS, 3 obsessive compulsive spectrum symptoms, 1 OCD and 1 schizophrenia. The majority were male (64%), and the average age at death was 67.2 +/- 9.0 years. The average postmortem interval was 16 h. Three matched controls were collected. The pilot stage confirmed that the protocols are well suited to reach our goals. Our unique autopsy source makes it possible to collect a fair number of high-quality cases in a short time. Such a collection offers an additional resource to the international research community to advance the understanding of neuropsychiatric diseases.
Abstract:
The ability to transmit and amplify weak signals is fundamental to signal processing in artificial devices in engineering. Using a multilayer feedforward network of coupled double-well oscillators as well as FitzHugh-Nagumo oscillators, we investigate here the conditions under which a weak signal received by the first layer can be transmitted through the network with or without amplitude attenuation. We find that the coupling strength and the states of the nodes in the first layer act as two-state switches, which determine whether the transmission is significantly enhanced or exponentially decreased. We hope this finding will be useful for designing artificial signal amplifiers.
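As a minimal sketch of the kind of numerical experiment the abstract describes, the code below integrates layers of overdamped double-well units, dx/dt = x - x^3 + input, where only the first layer receives the weak periodic signal and each subsequent layer is driven by the mean state of the previous one; the network size, coupling value and signal parameters are assumptions, not the paper's settings.

import numpy as np

def simulate(layers=5, nodes_per_layer=10, coupling=0.5,
             signal_amp=0.05, signal_freq=0.1, dt=0.01, steps=20000):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=(layers, nodes_per_layer))
    history = np.zeros((steps, layers))
    for step in range(steps):
        t = step * dt
        drive = np.zeros_like(x)
        drive[0] += signal_amp * np.sin(2 * np.pi * signal_freq * t)   # weak input signal
        drive[1:] += coupling * x[:-1].mean(axis=1, keepdims=True)     # feedforward coupling
        x += dt * (x - x**3 + drive)                                   # double-well dynamics
        history[step] = x.mean(axis=1)                                 # layer-averaged output
    return history

Comparing the amplitude of the periodic component in the layer-averaged outputs shows whether the signal is enhanced or attenuated as it propagates from layer to layer.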
Abstract:
Traditional supervised data classification considers only physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. On the other hand, the human (animal) brain performs both low and high orders of learning, and it is adept at identifying patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is here referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only can realize classification according to pattern formation, but is also able to improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for instance through mixture among different classes, a larger portion of the high level term is required to obtain correct classification. This feature confirms that high level classification is especially important in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images. As a result, it yields an improvement in the overall pattern recognition rate.
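A crude way to picture the proposed mixture is a convex combination of a low level membership term and a high level pattern-conformity term; the sketch below is only illustrative: the conformity proxy (how little a class's mean nearest-neighbour distance changes when the test point joins it) and all names are assumptions, not the network measures used in the paper.

import numpy as np

def _mean_nn(points):
    """Mean nearest-neighbour distance inside a set of points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def classify(x, class_points, lam=0.5):
    """class_points: dict label -> (n_i, d) array; lam weights the high level term."""
    labels = list(class_points)
    low, high = [], []
    for label in labels:
        pts = class_points[label]
        low.append(1.0 / (np.linalg.norm(pts - x, axis=1).mean() + 1e-12))
        change = abs(_mean_nn(np.vstack([pts, x])) - _mean_nn(pts))
        high.append(1.0 / (change + 1e-12))
    low = np.array(low) / np.sum(low)        # normalise each term across classes
    high = np.array(high) / np.sum(high)
    scores = (1 - lam) * low + lam * high    # lam = 0 recovers a purely physical rule
    return labels[int(np.argmax(scores))]

Increasing lam gives more weight to conformity with the class's pattern formation, which is the regime the abstract reports as necessary when classes are heavily mixed.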
Abstract:
Failure detection is at the core of most fault tolerance strategies, but it often depends on reliable communication. We present new algorithms for failure detectors that are appropriate as components of a fault tolerance system deployable under adverse network conditions (such as loosely connected and administered computing grids). Our approach packs redundancy into heartbeat messages, thereby improving on the robustness of traditional protocols. Results from experimental tests conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
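The abstract does not say how the redundancy is encoded; one simple, assumed realisation is to piggyback the last few heartbeat sequence numbers on every message, so a single delivered heartbeat can cover several lost ones before the detector raises a suspicion. Names and constants below are illustrative.

import time

WINDOW = 5        # assumed: how many past heartbeats each message repeats
TIMEOUT = 3.0     # seconds without any covered heartbeat before suspecting

class FailureDetector:
    def __init__(self):
        self.highest_seq = -1
        self.last_heard = time.monotonic()

    @staticmethod
    def make_heartbeat(seq):
        """Sender side: heartbeat seq plus the WINDOW-1 sequence numbers before it."""
        return list(range(max(0, seq - WINDOW + 1), seq + 1))

    def on_heartbeat(self, seqs):
        """Receiver side: any sequence number newer than what we have counts."""
        if max(seqs) > self.highest_seq:
            self.highest_seq = max(seqs)
            self.last_heard = time.monotonic()

    def suspected(self):
        return time.monotonic() - self.last_heard > TIMEOUT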
Abstract:
Semisupervised learning is a machine learning approach that is able to employ both labeled and unlabeled samples in the training process. In this paper, we propose a semisupervised data classification model based on a combined random-preferential walk of particles in a network (graph) constructed from the input dataset. The particles of the same class cooperate among themselves, while the particles of different classes compete with each other to propagate class labels to the whole network. A rigorous model definition is provided via a nonlinear stochastic dynamical system, and a mathematical analysis of its behavior is carried out. A numerical validation presented in this paper confirms the theoretical predictions. An interesting feature brought by the competitive-cooperative mechanism is that the proposed model can achieve good classification rates while exhibiting a low order of computational complexity compared with other network-based semisupervised algorithms. Computer simulations conducted on synthetic and real-world datasets reveal the effectiveness of the model.
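For intuition only, the sketch below caricatures the competitive-cooperative mechanism: each labelled node launches a particle that mixes random and preferential moves and reinforces its class's domination of the nodes it visits. The update rule, probabilities and reinforcement constant are assumptions; the paper's nonlinear stochastic dynamical system is not reproduced here.

import numpy as np

def particle_walk(adj, seeds, n_classes, p_pref=0.6, iters=5000, seed=0):
    """adj: list of neighbour lists; seeds: dict node -> class label."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    dom = np.full((n, n_classes), 1.0 / n_classes)   # per-node domination levels
    for node, label in seeds.items():
        dom[node] = 0.0
        dom[node, label] = 1.0                       # labelled nodes stay fixed
    particles = list(seeds.items())                  # (home node, class) pairs
    positions = [node for node, _ in particles]
    for _ in range(iters):
        for i, (_, label) in enumerate(particles):
            nbrs = adj[positions[i]]
            if not nbrs:
                continue
            if rng.random() < p_pref:                # preferential: favour own territory
                w = dom[nbrs, label] + 1e-12
                nxt = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
            else:                                    # random walk
                nxt = nbrs[rng.integers(len(nbrs))]
            if nxt not in seeds:                     # reinforce on unlabelled nodes only
                dom[nxt, label] += 0.1
                dom[nxt] /= dom[nxt].sum()
            positions[i] = nxt
    return dom.argmax(axis=1)                        # predicted label per node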
Abstract:
Over the last decade, Brazil has pioneered an innovative model of branchless banking, known as correspondent banking, involving distribution partnerships between banks, several kinds of retailers and a variety of other participants, which have allowed unprecedented growth in bank outreach and have become a reference worldwide. However, despite the large number of studies recently developed on Brazilian branchless banking, there is a clear research gap in the literature. It is still necessary to identify the different business configurations involving network integration through which the branchless banking channel can be structured, as well as the way they relate to the range of bank services delivered. Given this gap, our objective is to investigate the relationship between network integration models and the services delivered through the branchless banking channel. Based on twenty interviews with managers involved in the correspondent banking business and data collected on almost 300 correspondent locations, our research was developed in two steps. First, we created a qualitative taxonomy through which we identified three classes of network integration models. Second, we performed a cluster analysis to explain the groups of financial services that fit each model. By contextualizing correspondents' network integration processes through the lens of transaction cost economics, our results suggest that the more suited the channel is to deliver social-oriented, "pro-poor" services, the more it is controlled by banks. This research offers contributions to managers and policy makers interested in better understanding how different correspondent banking configurations are related to specific portfolios of services. Researchers interested in the subject of branchless banking can also benefit from the taxonomy presented and from the transaction cost analysis of this kind of banking channel, which has now been adopted in a number of developing countries around the world. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
This dissertation examines the challenges and limits that graph analysis algorithms face on distributed architectures built from personal computers. In particular, it analyzes the behaviour of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel programming model is unsuitable for an efficient implementation of PageRank on clusters built from personal computers. The analyzed implementation in fact exhibited negative scalability: the execution time of the algorithm increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank on a cluster of 43 dual-core PCs with 2 GB of RAM each, using several graphs chosen so as to facilitate the identification of the variables that influence scalability. Graphs representing different models gave different results, showing that there is a relationship between the clustering coefficient and the slope of the line representing execution time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, represented the worst case in the PageRank tests, while small-world graphs, which have a high clustering coefficient, represented the best case. The size of the graph also showed a particularly interesting influence on the execution time: the relationship between the number of nodes and the number of edges was shown to determine the total time.
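For reference, this is the per-iteration structure of PageRank that each Bulk Synchronous Parallel superstep has to realise; every iteration requires an exchange of rank contributions along all edges, which is where the communication cost on a commodity cluster comes from. This is a plain single-machine sketch, not the Parallel BGL code, and it ignores dangling-node mass for brevity.

def pagerank(out_links, damping=0.85, iters=50):
    """out_links: dict node -> list of successor nodes."""
    nodes = list(out_links)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        contrib = {v: 0.0 for v in nodes}
        for v, succs in out_links.items():
            if succs:
                share = rank[v] / len(succs)      # spread rank along out-edges
                for w in succs:
                    contrib[w] += share
        rank = {v: (1 - damping) / len(nodes) + damping * contrib[v]
                for v in nodes}
    return rank

In a distributed BSP execution the inner loop becomes message traffic between processors, so graphs whose structure forces many cross-processor edge updates pay a communication cost that can outweigh the computation, consistent with the negative scalability observed here.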
Abstract:
In this thesis the use of widefield imaging techniques and VLBI observations with a limited number of antennas are explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today’s imaging algorithms are to be used to image them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques. Errors have been found at the level of one part in 1.1 million. These are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13 gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling of distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI has been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam both for homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator. Using existing software it was possible to characterise the network speed, particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and were shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and science which could be achieved. Results from a 22 GHz calibrator survey are also presented. 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
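For context, a phase-centre shift is built on the standard visibility phase rotation shown below (a minimal sketch; the accuracy-preserving averaging that the thesis tests is not shown, and the sign follows the e^{-2πi(ul+vm)} transform convention, so it must be flipped for the opposite convention).

import numpy as np

def shift_phase_centre(vis, u, v, w, dl, dm):
    """vis, u, v, w: same-shape arrays, with (u, v, w) in wavelengths;
    dl, dm: direction-cosine offsets of the new phase centre."""
    dn = np.sqrt(1.0 - dl**2 - dm**2) - 1.0
    phase = -2.0j * np.pi * (u * dl + v * dm + w * dn)
    return vis * np.exp(phase)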
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can therefore play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, which are outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and to extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low power, low cost devices such as low power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of a WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low power, low cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as recognition of unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. Such an approach has been shown to extend node lifetime and may result in continuous operation of the node. Being low cost, passive (thus low power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
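As a small illustration of the PIR-trigger idea (not the MPC controller used in the thesis), the sketch below keeps the camera and radio off until the PIR fires and scales the allowed on-time by the current energy level; the threshold and bound are assumed values.

MAX_ON_SECONDS = 30            # assumed upper bound per activation

def on_pir_event(battery_level, camera, radio):
    """battery_level in [0, 1]; camera and radio expose power_on()/power_off()."""
    if battery_level < 0.1:
        return 0.0                       # too little energy: ignore this trigger
    on_time = MAX_ON_SECONDS * battery_level
    camera.power_on()                    # wake the expensive components only on demand
    radio.power_on()
    return on_time                       # caller powers both off after on_time seconds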
Abstract:
Summary of PhD thesis, Jan Pollmann: This thesis focuses on global-scale measurements of light reactive non-methane hydrocarbons (NMHCs), in the volatility range from ethane to toluene, with a special focus on ethane, propane, isobutane, butane, isopentane and pentane. Even though they occur only at the ppt level (pmol mol-1) in the remote troposphere, these species can yield insight into key atmospheric processes. An analytical method was developed and subsequently evaluated to analyze NMHCs from the NOAA ESRL cooperative air sampling network. Potential analytical interferences from other atmospheric trace gases (water vapor and ozone) were carefully examined. The analytical parameters accuracy and precision were analyzed in detail. It was shown that more than 90% of the data points meet the Global Atmosphere Watch (GAW) data quality objective. Trace gas measurements from 28 measurement stations were used to derive the global atmospheric distribution profile for four NMHCs (ethane, propane, isobutane, butane). A close comparison of the derived ethane data with previously published reports showed that the northern hemispheric ethane background mixing ratio has declined by approximately 30% since 1990. No such change was observed for southern hemispheric ethane. The NMHC data and trace gas data supplied by NOAA ESRL were used to estimate local diurnally averaged hydroxyl radical (OH) mixing ratios by variability analysis. The variability-derived OH was found to be in good agreement with directly measured and modeled OH mixing ratios outside the tropics. Tropical OH was on average two times higher than predicted by the model. Variability analysis was also used to assess the effect of chlorine radicals on atmospheric oxidation chemistry. It was found that Cl is probably not of significant relevance on a global scale.
Abstract:
Deep-focus earthquakes, which occur in the upper mantle at depths of about 400 km, are usually associated with the pressure-dependent polymorphic phase transition from olivine (α-phase) to spinel (β-phase) that takes place at the same depth. It is still unclear, however, how the phase transition relates to the mechanical failure of the mantle material. At present essentially two models are being discussed, which attribute the failure either to microstructures produced by the phase transition or to the rheological changes that the transition causes in the mantle rock. Because the natural material is inaccessible, investigations of the olivine→spinel transformation are entirely restricted to theoretical considerations, high-pressure experiments and numerical simulations. The central aim of this dissertation was to develop a working computer model for simulating the microstructures produced by the phase transition. The computer model was then applied to investigate the microstructural evolution of spinel grains and its controlling parameters. The thesis is therefore divided into two parts: the first part (Chapters 2 and 3) covers the underlying physics and the basic operation of the computer model, which is based on combining equations for the kinetic reaction rate with the laws of non-equilibrium thermodynamics under non-hydrostatic conditions. The computer model extends a spring network of the software latte from the program package elle. The most important parameter is the normal stress on the spinel grain surface. In addition, the program takes into account the latent heat of the reaction, the surface energy and the low viscosity of mantle material as further essential parameters in the calculation of the reaction kinetics. The growth behaviour and fractal dimension of the simulated spinel grains are in good agreement with spinel structures from high-pressure experiments. In the second part of the thesis the computer model is applied to explore the evolution of the surface structure of spinel grains under various conditions. The so-called 'anticrack theory of faulting', which explains the catastrophic course of the olivine→spinel transformation in olivine-bearing material under differential stress by stress concentrations, was examined with the computer model. The corresponding mechanism could not be confirmed. Instead, surface structures that resemble anticracks can be explained by impurities in the material (Chapter 4). A series of simulations was devoted to deriving the most important parameters controlling the reaction in monomineralic olivine (Chapters 5 and 6). The principal normal stresses on the system, heterogeneities in the host mineral and the viscosity turned out to be the main influences on the grain shape of spinel. Subsequently, the nucleation and growth of spinel in polymineralic mineral assemblages were investigated (Chapter 7). The reaction rate of the olivine→spinel transformation and the development of spinel networks and clusters are considerably accelerated by the presence of non-reactive minerals such as garnet or pyroxene. The formation of spinel networks has the potential to significantly influence the mechanical properties of mantle rock, whether through the formation of potential shear zones or through framework building. This localization process of spinel growth in mantle rocks may therefore represent a new explanatory model for deep-focus earthquakes.
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers have to face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level, general purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective from the development of intelligent software systems toward general purpose software development. Using the expertise gained during the background construction phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and at the same time provides an agent-oriented level of abstraction for the engineering of general purpose software systems.
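To make the starting point concrete, here is a minimal actor sketch in Python (illustrative only; simpAL is a dedicated language, not a Python library): an actor owns a mailbox and a thread and reacts to messages one at a time, and this is the message-passing model that the thesis argues should be extended with higher-level, human-inspired agent concepts.

import queue
import threading
import time

class Actor:
    """One mailbox, one thread: messages are handled strictly one at a time."""
    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, message):
        self.mailbox.put(message)              # asynchronous, non-blocking send

    def _loop(self):
        while True:
            self.receive(self.mailbox.get())   # no shared state, no locks needed

    def receive(self, message):
        raise NotImplementedError

class Printer(Actor):
    def receive(self, message):
        print("got:", message)

Printer().send("hello")
time.sleep(0.1)    # give the daemon thread time to handle the message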