
Technologies powering the grid 2012

Monday, October 29, 2012


The workhorses of the IBM grid infrastructure are the grid engines: desktop PCs, workstations or servers that run the UNIX, Microsoft Windows or Linux operating systems. These compute resources execute various jobs submitted to the grid and have access to a shared set of storage devices.

The IBM Grid Offering for Risk Management and Compliance relies on grid middleware from DataSynapse to create distributed sets of virtualized resources. The production-proven, award-winning DataSynapse GridServer application infrastructure platform extends applications in real time to operate in a distributed computing environment across a virtual pool of underutilized compute resources. GridServer application interface modules allow risk management and compliance applications, and next-generation development of risk management and compliance application platforms, to be grid-enabled.

IBM DB2 Information Integrator enables companies to have integrated, real-time access to structured and unstructured information across and beyond the enterprise. Critical to the grid infrastructure, the software accelerates risk and compliance analytics applications that process massive amounts of data for making better-informed decisions. DB2 Information Integrator provides transparent access to any data source, regardless of its location, type or platform.

UEFI evolution over time

Thursday, October 25, 2012

Let us briefly consider the path taken.

In principle, by 2010 we can identify two stages in the boot process.

1. The first stage is called PI (Platform Initialization), which covers bringing up the underlying platform: the first thing that runs is the security start-up phase and, if it completes successfully, the processor, then the chipset and then the motherboard are initialized in sequence. Once this is working, the operating system is loaded. Two options were proposed here: loading a transient operating system to host applications such as web browsers and email clients, or loading the operating system of choice, having previously chosen which device to boot from (disk, memory card, etc.).

2. After start-up, with the operating system already running, you may want to interact with the firmware directly. Wherever possible this should be done exclusively through the UEFI interfaces. In this regard, remember that Windows has historically chosen not to talk to the BIOS directly, instead using high-speed drivers rather than accessing the underlying firmware; this holds in all modern versions of Windows.
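As a small illustration of this kind of OS-to-firmware interaction through UEFI interfaces, here is a minimal sketch, assuming a UEFI-booted Linux system where the kernel exposes UEFI variables through efivarfs at /sys/firmware/efi/efivars; the helper name read_efi_var is mine, not part of any standard API.

# Minimal sketch: reading a UEFI variable from a running OS via efivarfs.
from pathlib import Path

EFIVARS = Path("/sys/firmware/efi/efivars")
# GUID of EFI_GLOBAL_VARIABLE, used for the standard boot manager variables.
GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"

def read_efi_var(name: str, guid: str = GLOBAL_GUID) -> bytes:
    """Return a UEFI variable's payload; efivarfs prepends 4 attribute bytes."""
    raw = (EFIVARS / f"{name}-{guid}").read_bytes()
    return raw[4:]

if __name__ == "__main__":
    # BootCurrent is a UINT16 holding the number of the boot entry actually used.
    boot_current = int.from_bytes(read_efi_var("BootCurrent"), "little")
    print(f"Firmware booted entry Boot{boot_current:04X}")

Windows exposes equivalent functionality through its own API (GetFirmwareEnvironmentVariable) rather than by talking to the firmware directly, which is exactly the point made above.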

To begin addressing the underlying issue, we can describe, prima facie, a four-stage cycle in the ordinary use of a PC:
     Power on
     Platform initialization
     Starting the operating system
     Power off

Boot Process


This specifies how the hardware invokes the start-up software or the operating system to begin loading. However, start-up no longer depends on an operating system loader as before: UEFI can load multiple operating systems without the need for a loader such as NTLDR (Windows) or LILO (Linux). UEFI selects the partition that holds the operating system and loads it from there. For this to happen, both the hardware and the software must be UEFI-compatible.
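To make the idea of the firmware selecting what to boot more concrete, here is a small sketch, again assuming a UEFI-booted Linux system with efivarfs mounted; the parsing helpers are illustrative, not a standard library API. It lists the firmware's boot entries in the order the boot manager will try them, by reading the BootOrder variable and the description stored in each Boot#### load option.

# Minimal sketch, assuming efivarfs is mounted at /sys/firmware/efi/efivars.
from pathlib import Path
import struct

EFIVARS = Path("/sys/firmware/efi/efivars")
GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"  # EFI_GLOBAL_VARIABLE

def efi_var(name: str) -> bytes:
    """Raw variable payload; efivarfs prefixes 4 bytes of attribute flags."""
    return (EFIVARS / f"{name}-{GLOBAL_GUID}").read_bytes()[4:]

def load_option_description(data: bytes) -> str:
    """An EFI_LOAD_OPTION starts with a UINT32 Attributes field and a UINT16
    FilePathListLength, followed by a null-terminated UTF-16LE description."""
    desc = data[6:]
    end = desc.find(b"\x00\x00")
    end += end % 2  # keep the cut on a 16-bit code-unit boundary
    return desc[:end].decode("utf-16-le", errors="replace")

if __name__ == "__main__":
    order_bytes = efi_var("BootOrder")
    order = struct.unpack(f"<{len(order_bytes) // 2}H", order_bytes)
    for num in order:
        entry = f"Boot{num:04X}"
        print(f"{entry}: {load_option_description(efi_var(entry))}")

This is essentially the same information that the firmware's own setup menus, or a tool such as efibootmgr, present.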

In the boot process, UEFI offers menus that are much friendlier than the old BIOS menus, and it allows certain hardware configuration tasks to be performed without loading the main operating system. This has nothing to do with the current ability of certain machines to load a browser without booting the main OS; those machines achieve this by carrying a mini operating system in the BIOS that handles such tasks. By contrast, with the new specification no mini operating system is loaded; instead, you select which drivers to load in order to run a particular application. Thus, through selective loading, web browsers or email clients can be started much faster than if a full OS had to be launched. These applications are called "applications without an operating system." Conversely, once the operating system is running and applications are used in the normal way, they are called "applications with an operating system."

To understand this better, consider the following image, which shows the building blocks of a generic or universal PC. At the bottom of the figure, the lowest layer is the hardware. Part of the hardware can be accessed through fixed code called firmware, and another part is accessible directly from the operating system. Here ends what would be called the "platform", delimited by the dotted line. Above this layer sits the operating system and, above it, the applications we use every day.

What is UEFI?


As I write this, the commercial launch of Windows 8 is taking place. To accompany this milestone, I decided to write this article about the start-up and shutdown acceleration technologies found in the new operating system. This article is a survey; its sources are the Intel(R) and Microsoft(R) websites and other technology blogs, and the images have mostly been generated ad hoc for this page, except for Figures 2 and 5.
This article will attempt to explain how Windows 8 takes advantage of UEFI to achieve very short start-up times. UEFI stands for Unified Extensible Firmware Interface. It is a specification, originally developed by Intel, intended to replace the old BIOS (Basic Input-Output System). Both the new UEFI and the old BIOS aim to let the operating system "understand" the basic hardware, which is usually handled by firmware. This new scheme introduces new components on the platform side (hardware and firmware) at start-up, on top of which the OS is then loaded. Windows 8 relies on UEFI boot to achieve very short start-up times.



Wednesday, October 24, 2012

Improving productivity and efficiency through a multistage implementation


Financial services firms can take an existing, inefficient infrastructure for risk management and compliance and gradually grow it into an integrated, highly efficient grid system. As shown in Figure 1, an existing infrastructure may comprise stovepipes of legacy applications: disparate islands of applications, tools, and compute and storage resources with little to no communication among them. A firm can start by enabling one application (a simulation application for credit risk modeling, for example) to run faster by using grid middleware to virtualize the compute and storage resources supporting that application.

The firm can extend the same solution to another application, for example a simulation application used to model market risk. Compute and storage resources for both simulation applications are virtualized by extending the layer of grid middleware; thus both applications can share processing power, networked storage and centralized scheduling. Resiliency is achieved at the application level through failover built into the DataSynapse GridServer. If failure occurs, or the need to prioritize particular analyses arises, one application can pull unutilized resources that are supporting the other application. This process also facilitates communication and collaboration across functional areas and applications to provide a better view of enterprise risk exposure.

Alternatively, a firm can modernize by grid-enabling a particular decision engine. A decision engine, such as one developed with Fair Isaac's tools, can deliver the agility of business rules and the power of predictive analytic models while leveraging the power of the grid to execute decisions in record time. This approach ensures that only the compute-intensive components are grid-enabled, while simultaneously migrating these components to technology specifically designed for decision components.

Over time, all applications can become completely grid-enabled or can share a common set of grid-enabled decision engines. All compute and data resources become one large resource pool for all the applications, increasing the average utilization rate of compute resources from between 2 and 50 percent in a heterogeneous architecture to over 90 percent in a grid architecture.

Based on priorities and rules, DataSynapse GridServer automatically matches application requests with available resources in the distributed infrastructure. This real-time brokering of requests with available resources enables applications to be serviced immediately, driving greater throughput. Application workloads can be serviced in task units of milliseconds, thus allowing applications with run times in seconds to execute in a mere fraction of a second. This run-time reduction is crucial as banks move from online to real-time processing, which is required for functions such as credit decisions made at the point of trade execution. Additionally, the run time of applications that require hours to process, such as end-of-day profit and loss reports on a credit portfolio, can be reduced to minutes by leveraging this throughput and resource allocation strategy.
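To illustrate the workload pattern described above (without pretending to show the actual DataSynapse GridServer API, which is proprietary), here is a minimal sketch of the same idea using a plain Python worker pool: a long-running risk calculation is cut into many short tasks so that idle capacity anywhere in the pool can be brokered to whichever request needs it. The simulate_losses function and its toy loss model are illustrative assumptions.

# Minimal task-farm sketch; a stand-in for the grid brokering idea, not GridServer.
from concurrent.futures import ProcessPoolExecutor
import random

def simulate_losses(seed: int, n_paths: int = 10_000) -> float:
    """One small grid task: a batch of Monte Carlo loss paths (toy model)."""
    rng = random.Random(seed)
    return max(rng.gauss(0.0, 1.0) for _ in range(n_paths))

if __name__ == "__main__":
    # Because each task is small, a scheduler can reassign or reprioritize work
    # when a worker fails or a higher-priority analysis arrives.
    with ProcessPoolExecutor() as pool:
        batch_maxima = list(pool.map(simulate_losses, range(200)))
    batch_maxima.sort()
    var_99 = batch_maxima[int(0.99 * len(batch_maxima))]
    print(f"Approximate 99th-percentile worst-case loss: {var_99:.3f}")

The same decomposition is what lets run times drop from hours to minutes: an end-of-day report becomes many such independent tasks scheduled across the shared pool.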


IBM Grid Offering for Risk Management and Compliance


The IBM Grid Offering for Risk Management and Compliance provides an efficient, scalable and standards-based solution to the most pressing issues facing risk and compliance managers today. Teaming with DataSynapse, IBM has created an integrated technology and service offering that can help risk managers implement a grid infrastructure to comply more efficiently with the Patriot Act, the Sarbanes-Oxley Act and NYSE Rule 92.

The IBM Grid Offering for Risk Management and Compliance allows firms to run their existing analytics applications and tools, whether custom-built management systems, best-of-breed commercial applications or a combination thereof. The offering's open, flexible infrastructure supports a wide range of packaged and custom analytical applications, including software from Algorithmics, Fair Isaac, SunGard, SAS, Moody's KMV and many more.

The IBM offering centers around grid computing, an architectural approach that enables distributed computing over the Internet, an intranet, a virtual private network or some combination thereof. This approach can help aggregate disparate IT elements such as compute resources, data storage and filing systems to create a single, unified system and to address fluctuating application workload requirements. One clear advantage of a distributed environment for risk management operations is the ability to run several risk scenarios in parallel to generate an optimal solution as quickly as possible, without sacrificing accuracy or performance.

As required by risk and compliance managers' analyses, the grid makes additional compute capacity available on a full-time or part-time basis. It helps banks leverage available, underutilized compute capacity within their existing IT infrastructures, thus helping them to reach end results far more rapidly than in conventional computing environments. And compared to a non-grid solution, the required compute resources are fewer and easier to manage, contributing to reduced total cost of ownership (TCO). The IBM offering also provides high levels of resiliency at the application level to help guarantee workload execution.

The IBM offering provides a cost-effective computing model for acquiring, deploying and managing resources. Because the existing infrastructure that supports the business of managing risk does not need to be replaced, firms can leverage their existing infrastructure and application investments, optimizing the efficiency of their risk management business and applications while migrating to a higher-performance, lower-cost, standards-based infrastructure. In addition, this grid infrastructure can be used for a wide array of analytical functions, including trading systems, fraud detection in retail banking and credit cards, customer analytics and segmentation, and more.

Friday, October 5, 2012

 Defective 'nasne' Recorder Gets Teardown Treatment

The release of the "nasne" recording server developed by Sony Computer Entertainment Inc (SCE) was suddenly postponed on the day before its planned release date. But, because the postponement was decided at the last minute, it was delivered to some users.

According to SCE, the release was postponed because they found that the nasne's HDD was partially broken. Even though it was shipped after quality checks at a manufacturing plant, an unexpected accident happened when it was being delivered, the company said. In fact, some of the "nasne" servers delivered to users did not operate because of their HDDs. To find out the cause of the failure, we obtained a nasne and started to break it down.

The teardown of the nasne turned out to be quite easy. When we took out the four small covers located on the four corners of one side of the nasne, we found a screw under each of them. After removing the screws with a crosshead screwdriver, we pried open the case by inserting a flat-blade screwdriver, a fingernail, etc. into a gap in the case.

The inside of the nasne was very simple. It contained only a large main board. The HDD mounted on the main board was printed with "HITACHI." It was the "Z5K500," a 500-Gbyte HDD manufactured by Hitachi Global Storage Technologies Inc (HGST). The thickness, size and rotation speed of the HDD are 7mm, 2.5 inches and 5,400rpm, respectively.

There was no component, such as a shock-absorbing material, to cushion impacts applied to the HDD. When seen from the side, it looked as if the HDD was floating in midair. In fact, the HDD was mounted on the main board via four small bases attached to the back side of the board with screws.

Therefore, shocks applied to the main board will certainly be transmitted to the HDD. No shock-absorbing material was used for the main board, either. So, if the nasne is hit or shaken, the vibration will probably be transmitted to the HDD via the main board.

Except for the HDD and the tuner module, the largest component was a semiconductor chip manufactured by Canada-based ViXS Systems Inc. It was the "XCODE 4210" media processor, which not only decodes video data encoded using MPEG 2, H.264, etc but also transcodes video data. Probably, the transcoding function is used when a TV program is recorded on the HDD in the 3x mode.

On the main board, there was also Toshiba Corp's "TC90532XBG" chip for demodulating terrestrial digital broadcasts (ISDB-T) and satellite digital broadcasts (ISDB-S). In addition, it was mounted with four "K4B1G1646G-BCH9" chips manufactured by Samsung Electronics Co Ltd. They seemed to be 1G-bit DDR3 SDRAM chips. Near the Ethernet port, there was the "RTL8211EG" Ethernet transceiver manufactured by Taiwan-based Realtek Semiconductor Corp.





It seems that a new version of Asterisk has arrived!

The A.D.T. (Asterisk Development Team) has announced a new version of Asterisk: 1.8.16.0.
This release resolves a number of issues reported by the community.
Thank you!
Some of them are:
* --- AST-2012-012: Resolve AMI User Unauthorized Shell Access through ExternalIVR
(Closes issue ASTERISK-20132. Reported by Zubair Ashraf of IBM X-Force Research)
* --- AST-2012-013: Resolve ACL rules being ignored during calls by some IAX2 peers
(Closes issue ASTERISK-20186. Reported by Alan Frisch)
* --- Handle extremely out of order RFC 2833 DTMF
(Closes issue ASTERISK-18404. Reported by Stephane Chazelas)
* --- Resolve severe memory leak in CEL logging modules.
(Closes issue AST-916. Reported by Thomas Arimont)
* --- Only re-create an SRTP session when needed; respond with correct crypto policy
(Issue ASTERISK-20194. Reported by Nicolo Mazzon)
For a full list of changes in this release, please see the ChangeLog.


What is Data Mining 2012

Suppose you wanted to optimize a cyclone furnace (an older-type design for burning coal, still in
use in many power plants) for stable high flame temperatures. Stable temperatures are necessary
to ensure cleaner combustion, and less build-up of undesirable slag that may interfere with heat
transfer. Typically, most power plants are equipped with very effective data gathering and
storage technologies, so there are easy ways to extract the data that describe various parameter
settings, as well as flame temperatures, on a minute-by-minute interval.
Traditional methods to approach this task – to optimize combustion to achieve stable flame
temperatures in the presence of different loads, fuel quality, and so on – come down to the
application of a-priori (CFD) models, or more or less trial-and-error parametric testing.


CFD (Computational Fluid Dynamics) modeling
One approach is to use explicit theoretical (first principles) models, to understand (based on
these usually complex and highly nonlinear models) how best to set certain parameters, distribute
airflows, etc. to optimize performance. With an explicit theoretical knowledge (model) of how
exactly various parameters affect flame temperatures, one can use standard computer
optimization algorithms to identify optima, which "in the laboratory" can be expected to
optimize for stable flame temperatures. 
Typically, these methods are used to identify the parameter "boundaries" where to keep certain
input parameters (controlled by operators, or closed loop control systems) to ensure stable
operations. However, in practice, there are numerous obstacles that put limitations on the
applicability, effectiveness, and usefulness of CFD methods to optimize furnace performance "in
vivo", i.e., inside a "real" power plant. 
First, theoretical, a-priori, physical models of furnaces will only model parameters that are
known (consistent with models) to have an influence. If in a particular installation, there are
other specific "noise factors" that affect performance, CFD will not "know about this", nor can
CFD models accommodate various esoteric installation details in a real power plant. 

Second, CFD models can be very complex, and indeed become practically impossible to
optimize because of their complexity. 
So what is often needed is a "simplification" of sorts, or a "proxy-model" ("stand-in") that can
summarize how the parameter inputs such as overfire air (OFA) distribution, primary and
secondary air flows, coal-flow, and so on will affect flame temperatures, and the variability in
flame temperatures. Data mining methods can provide such "proxy models", as will be further
explained later.
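As a concrete and deliberately simplified illustration of such a proxy model, the sketch below fits a data-driven surrogate that maps a few operational parameters to flame temperature and then searches it for promising settings. The parameter names, ranges and the synthetic response are assumptions made up for the example; in practice the X and y arrays would come from the plant's minute-by-minute historian data.

# Minimal proxy-model sketch on synthetic data (illustrative assumptions only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical minute-by-minute process parameters.
X = np.column_stack([
    rng.uniform(0.1, 0.4, n),   # OFA fraction
    rng.uniform(20, 60, n),     # primary air flow
    rng.uniform(40, 120, n),    # secondary air flow
    rng.uniform(10, 35, n),     # coal feed rate
])
# Toy nonlinear response standing in for the measured flame temperature.
y = (1500 + 400 * X[:, 0] - 2.0 * (X[:, 1] - 40) ** 2 / 40
     + 1.5 * X[:, 2] + 8.0 * X[:, 3] + rng.normal(0, 15, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
proxy = GradientBoostingRegressor().fit(X_tr, y_tr)
print("Held-out R^2 of the proxy model:", round(proxy.score(X_te, y_te), 3))

# The fitted proxy is cheap to evaluate, so it can be queried inside a simple
# search over candidate settings to look for stable, high-temperature regions.
candidates = rng.uniform([0.1, 20, 40, 10], [0.4, 60, 120, 35], size=(1000, 4))
best = candidates[np.argmax(proxy.predict(candidates))]
print("Best candidate settings found:", np.round(best, 2))

Unlike a CFD run, evaluating the surrogate takes microseconds, which is what makes this kind of "stand-in" practical for optimization and what-if analysis on plant data.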


 Why Data Warehousing?
The concept of data warehousing has evolved out of the need for easy access to
a structured store of quality data that can be used for decision making. It is
globally accepted that information is a very powerful asset that can provide
significant benefits to any organization and a competitive advantage in the
business world. Organizations have vast amounts of data but have found it
increasingly difficult to access it and make use of it. This is because it is in
many different formats, exists on many different platforms, and resides in many
different file and database structures developed by different vendors. Thus
organizations have had to write and maintain perhaps hundreds of programs
that are used to extract, prepare, and consolidate data for use by many different
applications for analysis and reporting. Also, decision makers often want to dig
deeper into the data once initial findings are made. This would typically require
modification of the extract programs or development of new ones. This process
is costly, inefficient, and very time consuming. Data warehousing offers a better
approach.
Data warehousing implements the process to access heterogeneous data
sources; clean, filter, and transform the data; and store the data in a structure
that is easy to access, understand, and use. The data is then used for query,
reporting, and data analysis. As such, the access, use, technology, and
performance requirements are completely different from those in a
transaction-oriented operational environment. The volume of data in data
warehousing can be very high, particularly when considering the requirements
for historical data analysis. Data analysis programs are often required to scan
vast amounts of that data, which could result in a negative impact on operational
applications, which are more performance sensitive. Therefore, there is a
requirement to separate the two environments to minimize conflicts and
degradation of performance in the operational environment.
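Below is a minimal sketch of that access, clean, transform and store flow, using one hypothetical operational source (a CSV export) and SQLite as a stand-in for the warehouse; the file name, column names and table layout are illustrative assumptions, not taken from the text.

# Minimal ETL sketch: extract from an operational export, clean/transform, load.
import csv
import sqlite3

def extract_csv(path: str):
    """Extract: read raw order records from an operational CSV export."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Clean and transform: drop incomplete rows, normalize types and codes."""
    for row in rows:
        if not row.get("amount") or not row.get("region"):
            continue  # filter out unusable records
        yield (row["order_id"], row["region"].strip().upper(), float(row["amount"]))

def load(rows, warehouse="warehouse.db"):
    """Load: store the cleaned data in a structure that is easy to query."""
    con = sqlite3.connect(warehouse)
    con.execute("CREATE TABLE IF NOT EXISTS sales_fact "
                "(order_id TEXT, region TEXT, amount REAL)")
    con.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract_csv("orders_export.csv")))

Once loaded, analysts can query sales_fact directly instead of writing and maintaining a new extract program against each operational system, which is the shift the text describes.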
 