962 results for Checking accounts


Relevance: 20.00%

Abstract:

In handling large volumes of data such as chemical notations, serial numbers for books, etc., it is always advisable to provide checking methods that indicate the presence of errors. The entire new discipline of coding theory is devoted to the study of the construction of codes that provide such error-detecting and error-correcting means. Although these codes are very powerful, they are highly sophisticated from the point of view of practical implementation.
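As a concrete illustration of such a checking method (a standard textbook example, not one taken from the paper), the ISBN-10 scheme appends a check digit that makes a weighted digit sum divisible by the prime 11; because 11 is prime, every single-digit error and every adjacent transposition changes the residue and is detected:

```python
def isbn10_check_digit(first_nine: str) -> str:
    # Weighted sum with weights 10..2; the appended digit makes the
    # total divisible by 11. "X" stands for a check value of 10.
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first_nine))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn10_is_valid(isbn: str) -> bool:
    return isbn10_check_digit(isbn[:9]) == isbn[9]

print(isbn10_check_digit("030640615"))  # -> 2
print(isbn10_is_valid("0306406152"))    # -> True
print(isbn10_is_valid("0306406125"))    # -> False (transposed digits)
```

Such modular checks detect all single-digit and adjacent-transposition errors, but unlike the full error-correcting codes the abstract mentions, they cannot locate or repair the error.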

Relevance: 20.00%

Abstract:

International mergers and acquisitions (M&As) often invoke national identification and national cultural differences. We argue that metonymy is a central linguistic resource through which national cultural identities and differences are reproduced in media accounts of international M&As. In this paper, we focus on two revealing cases: the acquisition of American IBM Personal Computer Division (PCD) by the Chinese company Lenovo and the acquisition of American Anheuser-Busch (A-B) by the Belgian-Brazilian company InBev. First, we identify the forms, functions and frequencies of national metonymy in media accounts of these cases. We present a typology that classifies varieties of national metonymy in international M&As. Second, we demonstrate how these metonyms combine with metaphor to generate evocative imagery, engaging wit, and subversive irony. Our findings show that national metonymy contributes to the construction of emotive frames, stereotypes, ideological differences, and threats. Combinations of national metonymy with metaphor also provide powerful means to construct cultural differences. However, combinations of metonymy with wit and irony enable the play on meanings that overturns and resists national and cultural stereotypes. This is the first study to unpack the deployment of metonymy in accounts of international M&As. In doing so, it also opens up new avenues for research into international management and the analysis of tropes in management and organization.

Relevance: 20.00%

Abstract:

In this paper we present simple methods for construction and evaluation of finite-state spell-checking tools using an existing finite-state lexical automaton, freely available finite-state tools and Internet corpora acquired from projects such as Wikipedia. As an example, we use a freely available open-source implementation of Finnish morphology, made with traditional finite-state morphology tools, and demonstrate rapid building of Northern Sámi and English spell checkers from tools and resources available from the Internet.
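The checkers in the paper are built from finite-state lexical automata; as a rough sketch of the underlying idea only (the function names here are invented, and no finite-state machinery is used), a word list combined with an edit-distance-1 candidate generator already gives a minimal corrector:

```python
import string

def edits1(word: str) -> set[str]:
    # All strings at Damerau-Levenshtein distance 1 from `word`.
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word: str, lexicon: set[str]) -> list[str]:
    if word in lexicon:
        return [word]                      # already correct
    return sorted(edits1(word) & lexicon)  # in-lexicon corrections

lexicon = {"checking", "checker", "finite", "state", "spell"}
print(suggest("checkng", lexicon))  # -> ['checking']
```

A finite-state implementation, as in the paper, composes the error model and the lexicon as automata instead of enumerating candidate strings, which scales far better for morphologically rich languages such as Finnish or Northern Sámi.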

Relevance: 20.00%

Abstract:

Cornell's conventional source-based approach to probabilistic seismic-hazard assessment (PSHA) has been employed all around the world, while many studies rely on computer packages such as FRISK (McGuire, FRISK: a computer program for seismic risk analysis. Open-File Report 78-1007, United States Geological Survey, Department of Interior, Washington, 1978) and SEISRISK III (Bender and Perkins, SEISRISK III: a computer program for seismic hazard estimation, Bulletin 1772. United States Geological Survey, Department of Interior, Washington, 1987). A "black-box" syndrome may result if the user of such software has no simple and robust PSHA method with which to make comparisons. An alternative method for PSHA, the direct amplitude-based (DAB) approach, has been developed as a heuristic and efficient method enabling users to undertake their own sanity checks on outputs from computer packages. This paper applies the DAB approach to three cities in China, Iran, and India, and compares the results with documented results computed by the source-based approach. Several insights regarding the procedure of conducting PSHA have also been obtained, which could be useful for future seismic-hazard studies.
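The DAB procedure itself is not reproduced here, but the Poisson occurrence model underlying most PSHA outputs already supports the kind of sanity check the authors advocate; the relation below is standard textbook material, not code from the paper:

```python
import math

def exceedance_prob(annual_rate: float, years: float) -> float:
    # Poisson assumption: P(at least one exceedance in `years`)
    return 1.0 - math.exp(-annual_rate * years)

def annual_rate_from_prob(prob: float, years: float) -> float:
    # Invert the Poisson relation to recover the annual exceedance rate.
    return -math.log(1.0 - prob) / years

rate = annual_rate_from_prob(0.10, 50.0)  # "10% in 50 years" design level
print(round(1.0 / rate))                  # -> 475 (the familiar return period)
```

If a package's reported hazard curve and return periods are not mutually consistent under this relation, something upstream, in the source model or the attenuation handling, deserves scrutiny.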

Relevance: 20.00%

Abstract:

Current standard security practices do not provide substantial assurance about information flow security: the end-to-end behavior of a computing system. Noninterference is the basic semantic condition used to account for information flow security. In the literature there are many definitions of noninterference: Non-inference, Separability, and so on. Mantel presented a framework of Basic Security Predicates (BSPs) for characterizing the definitions of noninterference in the literature. Model-checking these BSPs for finite-state systems was shown to be decidable in [8]. In this paper, we show that verifying these BSPs for the more expressive system model of pushdown systems is undecidable. We also give an example of a simple security property that is undecidable even for finite-state systems: the property is a weak form of non-inference called WNI, which is not expressible in Mantel's BSP framework.

Relevance: 20.00%

Abstract:

The paper proposes a unified error-detection technique, based on stability checking, for on-line detection of delay, crosstalk, and transient faults in combinational circuits, and of SEUs in sequential elements. The proposed method, called modified stability checking (MSC), overcomes the limitations of earlier stability-checking methods. The paper also proposes a novel checker circuit to realize this scheme. The checker is self-checking for a wide set of realistic internal faults, including transient faults. Extensive circuit simulations have been done to characterize the checker circuit. A prototype checker circuit for a 1 mm² standard cell array has been implemented in a 0.13 µm process.
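The MSC checker is a transistor-level circuit, but its core idea, flagging a node that changes value after it should have settled, can be caricatured in software (a loose analogy of the stability-checking principle, not the paper's design):

```python
def stable(samples: list[int], settle: int) -> bool:
    # A node is flagged stable if every sample taken after the
    # assumed settling time agrees with the final (latched) value.
    tail = samples[settle:]
    return all(s == tail[-1] for s in tail)

clean   = [0, 1, 1, 1, 1, 1]  # settles by cycle 2 and stays at 1
glitchy = [0, 1, 1, 0, 1, 1]  # transient pulse after settling
print(stable(clean, 2), stable(glitchy, 2))  # -> True False
```

The same late-change signature covers delay faults (the correct value arrives too late), crosstalk glitches, and particle-strike transients, which is what makes a single stability-based checker "unified."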

Relevance: 20.00%

Abstract:

Bisimulation-based information flow properties were introduced by Focardi and Gorrieri [1] as a way of specifying security properties for transition system models. These properties were shown to be decidable for finite-state systems. In this paper, we study the problem of verifying these properties for some well-known classes of infinite state systems. We show that all the properties are undecidable for each of these classes of systems.

Relevance: 20.00%

Abstract:

Large software systems are developed by composing multiple programs. If the programs manipulate and exchange complex data, such as network packets or files, it is essential to establish that they follow compatible data formats. Most of the complexity of data formats is associated with the headers. In this paper, we address compatibility of programs operating over headers of network packets, files, images, etc. As format specifications are rarely available, we infer the format associated with headers by a program as a set of guarded layouts. In terms of these formats, we define and check compatibility of (a) producer-consumer programs and (b) different versions of producer (or consumer) programs. A compatible producer-consumer pair is free of type mismatches and logical incompatibilities such as the consumer rejecting valid outputs generated by the producer. A backward-compatible producer (resp. consumer) is guaranteed to be compatible with consumers (resp. producers) that were compatible with its older version. With our prototype tool, we identified 5 known bugs and 1 potential bug in (a) sender-receiver modules of Linux network drivers from 3 vendors and (b) different versions of a TIFF image library.
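As a toy sketch of layout-based compatibility checking (the header fields below are invented, and the paper's guarded layouts additionally carry the value constraints that guard each layout):

```python
import struct

# Hypothetical header layouts: (field name, struct wire type) pairs.
PRODUCER_V1 = [("magic", "H"), ("version", "B"), ("length", "I")]
CONSUMER    = [("magic", "H"), ("version", "B"), ("length", "I")]

def layout_compatible(producer, consumer) -> bool:
    # A crude check: same field order, names, and wire types.
    # Real compatibility also rules out logical mismatches, e.g. the
    # consumer rejecting header values the producer legitimately emits.
    return producer == consumer

def wire_size(layout) -> int:
    # Packed little-endian size ("<" disables struct padding).
    return struct.calcsize("<" + "".join(fmt for _, fmt in layout))

print(layout_compatible(PRODUCER_V1, CONSUMER))  # -> True
print(wire_size(PRODUCER_V1))                    # -> 7 bytes on the wire
```

Even this crude version catches the classic bug class where one side adds or reorders a header field without updating the other; the paper's analysis goes further by inferring the layouts from the programs themselves.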

Relevance: 20.00%

Abstract:

Counter systems are a well-known and powerful modeling notation for specifying infinite-state systems. In this paper we target the problem of checking liveness properties in counter systems. We propose two semi-decision techniques for this problem, both of which return a formula that encodes the set of reachable states of the system that satisfy a given liveness property. A novel aspect of our techniques is that they use reachability analysis techniques, which are well studied in the literature, as black boxes, and are hence able to compute precise answers on a much wider class of systems than previous approaches for the same problem. Moreover, they compute their results by iterative expansion or contraction, and hence permit an approximate solution to be obtained at any point. We state the formal properties of our techniques, and also provide experimental results using standard benchmarks to show the usefulness of our approaches. Finally, we sketch an extension of our liveness-checking approach to check general CTL properties.
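For intuition, reachability on a toy one-counter system can be enumerated by bounded breadth-first search; this bounded enumeration merely stands in for the symbolic reachability engines the paper composes as black boxes over genuinely infinite state spaces:

```python
from collections import deque

def step(c: int):
    # A toy counter system: increment always enabled, decrement guarded.
    yield c + 1
    if c > 0:
        yield c - 1

def reachable(init: int, bound: int) -> set[int]:
    # Bounded BFS; a symbolic engine would instead return a formula
    # describing the (possibly infinite) reachable set exactly.
    seen, todo = {init}, deque([init])
    while todo:
        c = todo.popleft()
        for n in step(c):
            if n <= bound and n not in seen:
                seen.add(n)
                todo.append(n)
    return seen

print(sorted(reachable(0, 5)))  # -> [0, 1, 2, 3, 4, 5]
```

The paper's iterative expansion/contraction then intersects such reachable sets with the states satisfying the liveness property, so any intermediate iterate is already a usable approximation.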

Relevance: 20.00%

Abstract:

This contribution aims to assess the 'historicity' of the New Testament texts, specifically of those traditionally held to present the most trustworthy ambient 'wrapping'. Such a task cannot be undertaken without first discerning the differences among its various texts, without explaining the distinct settings in which each tradition was forged, and without accounting for the intent of the different genres employed. The work of Luke, comprising the third synoptic gospel and the so-called Acts of the Apostles, offers the most clearly diachronic account, from the birth of Jesus to the establishment of Christianity in Rome, and amounts to almost a third of the New Testament text, considerably more if we bear in mind that understanding it requires comparison with the other synoptic gospels and with the Pauline letters. The perspective adopted in this study is that of the historian, not that of exegesis: the work of Luke is analyzed as if it were simply another text of the Hellenistic tradition. It is a text that must therefore conform to literary conventions intelligible to its intended readers, a text composed in the years of the Roman Empire's greatest splendor, most probably at the end of the first century, in a geographical and cultural context that remains imprecise but that must take into account the Palestinian troubles following the Jewish war of the years 67-70 and the climate of religious contention and theological creativity that would necessarily characterize a new religion, still taking shape, that was refining and perfecting its definitive marks of identity.
In this light, the author's personality and his degree of commitment to the religious group he seeks to portray must be weighed; it is, of course, necessary to decipher the intent of the text, conditioned by its genre and by the audience it seeks to reach. The particular information that Luke-Acts provides must be placed in context and, where possible, corroborated against other contemporary sources. From this process we may conclude whether the information given is accurate or not; where that level of precision is unattainable, we can at least judge whether it is credible or, on the contrary, a mere contrivance.

Relevance: 20.00%

Abstract:

Internal control belongs to the context of organizational governance. In Brazilian public administration, the Executive, Legislative, and Judiciary branches are responsible for maintaining an integrated internal control system, as provided for in the Federal Constitution. Governance-related aspects are addressed by Agency Theory, in which the relationship between principal and agent is marked by information asymmetry and conflicts of interest. The objective of this study is to investigate the disclosure of governance principles in the audit reports produced by the internal control body of the Brazilian Navy (Marinha do Brasil). This is descriptive, documentary, ex post facto research, conducted as a case study at the Navy's Internal Control Center (Centro de Controle Interno da Marinha, CCIMAR). Given the volume of material made available by the body, the study was limited to the 2012 management evaluation audit reports, the audited units having been previously selected by the Tribunal de Contas da União (TCU). In 2012, CCIMAR produced six management evaluation audit reports, which therefore constitute the convenience sample of this research. To guide the investigation, a frame of reference was defined that encompasses and integrates the governance principles addressed by the following studies: Cadbury Committee (1992); Nolan Committee (1995); Netherlands Ministry of Finance, Timmers (2000); IFAC (2001); ANAO (2003); OECD (2004); and IBGC (2009). The principles finally selected for investigation were Accountability, Equity, Integrity, and Transparency, associated, respectively, with the keywords prestação(ões) de contas / prestar contas (rendering of accounts), tratamento justo (fair treatment), confiabilidade / fidedignidade das informações / dos dados (reliability / trustworthiness of information / data), and disponibilidade / divulgação das informações / dos dados (availability / disclosure of information / data), as defined by the contexts of the meanings highlighted in the frame of reference.
The principles and keywords thus formed the analytical framework used to investigate the audit reports, and were given quantitative-qualitative treatment. After examining the occurrences of the principles and keywords in the reports consulted, the results indicated that: (1) the Accountability principle was associated with meeting the deadlines and legal formalities required in public accountability processes; (2) the Equity principle was evidenced essentially from the internal perspective of the audited units, appearing in recommendations that called for more consistent and effective action by the respective management councils in running the organizations; (3) the Integrity principle was treated in the reports both as a personal attribute (moral integrity) of public agents and as a necessary characteristic of the information reported in documents issued by public bodies; and (4) Transparency was cited as the principle that reduces information asymmetry among stakeholders, allowing them access to relevant information, such as the application of the public resources allocated to the organizations of the Brazilian Navy.

Relevance: 20.00%

Abstract:

Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. © 2010 Nagengast et al.
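One common way to formalize the risk sensitivity described above is a mean-variance trade-off over the movement cost (a simplified sketch of the objective only; the study's controller is a full optimal feedback controller, which this toy calculation does not implement):

```python
def risk_sensitive_cost(costs: list[float], theta: float) -> float:
    # Mean-variance form of risk sensitivity: theta > 0 penalizes
    # variability (risk-averse), theta < 0 rewards it (risk-seeking),
    # and theta = 0 recovers the risk-neutral expected cost.
    n = len(costs)
    mean = sum(costs) / n
    var = sum((c - mean) ** 2 for c in costs) / n
    return mean + theta * var

# Two strategies with equal expected cost but different variability:
safe  = [10.0, 10.0, 10.0, 10.0]
risky = [0.0, 20.0, 0.0, 20.0]

# A risk-averse controller (theta > 0) prefers the low-variance strategy
# even though a risk-neutral one (theta = 0) is indifferent.
print(risk_sensitive_cost(safe, 0.05), risk_sensitive_cost(risky, 0.05))
```

This is exactly the distinction the experiment exploits: under higher Brownian noise the two objectives prescribe different strategies, so observed behavior can discriminate between them.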
