Wednesday, November 12, 2014

Computer: Information Technology (IT) Development History in Nepal

Year Activity 
1971: Introduction of computer for census (IBM1401)

1974: 
Establishment of the Electronic Data Processing Centre (later merged into the National Computer Centre) to promote computer usage and computer literacy

1982: 
First private overseas investment in software development, with the establishment of an export-oriented company, Data Systems International (P) Ltd.

1985: Distribution of Personal Computers in Nepal

1992: Establishment of Computer Association of Nepal

1995: Establishment of Mercantile Communications Pvt. Ltd.

1996: Establishment of the Ministry of Science, Technology & Environment

1998: Establishment of Nepal Telecommunications Authority (NTA)

2000: Announcement of the first IT policy, “IT Policy 2000”

2001: Establishment of National Information Technology Center

2004: Enactment of the Electronic Transactions Act

2004: Telecommunication Policy 2060

2008: Establishment of Government Integrated Data Center

2008: Publication of National Standard Code for Information Interchange.

2010: Promulgation of IT Policy 2010

2012: Establishment of  Department of Information Technology

2014: Disaster Recovery Center under construction in Hetauda

Monday, May 19, 2014

Computer: What are the principles of performance and scalability in Computer Architecture?

Computer architecture aims to achieve good performance from a computer system. Implementing concurrency can enhance performance; concurrency can be realised as parallelism, or as multiple processors within a computer system. Computer performance is measured by the total time needed to execute an application program. Another factor that affects performance is the speed of memory, which is why current processors have their own cache memory. In a multiprocessor system, scalability is required to achieve good performance.

Scalability means that as the cost of a multiprocessor increases, its performance should also increase in proportion. The size, access time, and speed of memories and buses play a major role in the performance of the system.
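The relationship between adding processors and the resulting speedup can be illustrated with Amdahl's law, a standard formula not named in the text above: if a fraction p of a program can be parallelised across n processors, the overall speedup is 1 / ((1 − p) + p/n). A minimal Python sketch:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when a fraction p of the work runs on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Doubling the processors helps less and less while part of the
# program remains serial (here 10% is serial, p = 0.9):
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(0.9, n), 2))
# → 1 1.0
#   2 1.82
#   4 3.08
#   8 4.71
```

This shows why scalability is not automatic: the serial fraction caps the benefit of extra processors, so memory and bus speeds (which contribute to that serial fraction) matter as much as processor count.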

Friday, May 9, 2014

Computer: Difference between POP3 and IMAP Mail Server

POP3
  • Email must be downloaded to the desktop PC before it can be displayed, which can cause problems:
    • All email must be downloaded again when another desktop PC is used to check email.
    • It is easy to get confused when checking email both in the office and at home.
    • Downloaded email may be deleted from the server, depending on the email client's settings.
  • All messages, along with their attachments, are downloaded to the desktop PC during the 'check new email' process.
  • Mailboxes can be created only on the desktop PC; only one mailbox (INBOX) exists on the server.
  • Filters can move incoming/outgoing messages only to local mailboxes.
  • Outgoing email is stored only locally on the desktop PC.
  • Messages are deleted on the desktop PC; cleaning up the mailbox on the server is comparatively inconvenient.
  • Messages may be reloaded onto the desktop PC several times if system files become corrupted.

IMAP
  • Email is kept on the server, which brings the following benefits:
    • There is no need to download all email when another desktop PC is used to check email.
    • Unread email is easier to identify.
  • A whole message is downloaded only when it is opened for display.
  • Multiple mailboxes can be created on the desktop PC as well as on the server.
  • Filters can move incoming/outgoing messages to other mailboxes regardless of where those mailboxes are located (on the server or the PC).
  • Outgoing email can be filtered into a mailbox on the server so it is accessible from other machines.
  • Messages can be deleted directly on the server, making it more convenient to clean up the mailbox on the server.
  • Messages are reloaded from the server to the PC far less often than with POP3.

Monday, April 28, 2014

Computer (C 3.2): The functions and purposes of translators

The functions and purposes of translators


  • Machine code is the set of instructions or operations, in binary, that a computer's processor uses.
  • Assembly language is a set of mnemonics that match machine code instructions.
  • An assembler is software which translates an assembly language program into machine code.

    • An assembler looks up an assembly language mnemonic in a table and reads off the matching machine code instruction.

  • Interpretation involves software that reads one source code instruction, interprets it and executes it before moving onto the next instruction.
  • Compilation involves software that reads a complete source code program, analyses it and produces object code. The object code is executed without reference to the source code, at a later time (or even on a different computer).

    • During lexical analysis, the source code is checked and turned into a stream of tokens. Errors in the use of the language (such as misspelled keywords or incorrectly formed identifiers) are reported.
    • During syntax analysis, the output from the lexical analyser is checked against the grammar of the programming language. Errors in the use of the language (such as missing keywords) are reported.
    • In the code generation phase, the output from the syntax analyser is turned into optimised object code.
    • Optimisation tries to improve the code so that it takes as little processing time and memory as possible.

  • A library routine is a precompiled, self-contained piece of code that can be used in the development of other programs.
  • A loader loads modules (including library routines) into memory and sorts out memory allocations.
  • A linker links modules together by making sure that references from one module to another are correct.
  • Errors recognised during compilation (by the syntax and lexical analyser) are reported to the programmer, who must fix them before recompiling.
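The assembler's table lookup described above can be sketched in a few lines of Python. The mnemonics, opcodes, and 8-bit instruction format here are invented for illustration, not taken from any real instruction set:

```python
# Hypothetical mnemonic-to-opcode table; a real assembler would also
# encode addressing modes and resolve symbolic labels.
OPCODES = {
    "LDA": 0b0001,  # load accumulator
    "ADD": 0b0010,  # add to accumulator
    "STA": 0b0011,  # store accumulator
    "HLT": 0b1111,  # halt
}

def assemble(lines):
    """Translate assembly mnemonics into 8-bit machine-code words."""
    code = []
    for line in lines:
        mnemonic, *operand = line.split()
        opcode = OPCODES[mnemonic]                 # the table lookup
        arg = int(operand[0]) if operand else 0
        code.append((opcode << 4) | (arg & 0xF))   # 4-bit opcode, 4-bit operand
    return code

program = ["LDA 5", "ADD 3", "STA 9", "HLT"]
print([f"{word:08b}" for word in assemble(program)])
# → ['00010101', '00100011', '00111001', '11110000']
```

The one-to-one lookup is what distinguishes an assembler from a compiler, which must analyse and transform whole statements rather than translate line by line.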

Computer (C 3.1): The functions of operating systems

The functions of operating systems

  • The main aim of an operating system is to manage the resources of the computer system:

    • processor management – for multiprogramming, the low-level scheduler must decide which job will get the next use of the processor
    • decide on appropriate scheduling algorithms
    • file management – maintaining a list of files, directories and which file allocation units belong to which files
    • input/output management – control of all input and output devices attached to the computer
    • memory management – using strategies such as segmentation and paging and the high-level scheduler to decide which jobs will be loaded next.

  • Peripherals and processes that want to use the processor send an interrupt:

    • a program interrupt signals an error in a program.
    • an I/O interrupt signals that a data transfer is complete or an error has occurred.
    • a timer interrupt signals that a time-critical activity needs attention.
    • a hardware error signals that some device has failed.

  • After the processor finishes executing an instruction, it checks the priority queue of interrupts.
  • High-priority interrupts are serviced before the processor continues with the next instruction.
  • Every interrupt signal has its own interrupt service routine (ISR) that services the interrupt.
  • The state of registers and memory is stored before an ISR runs in order that the interrupted process can resume from the same point.
  • Scheduling tries to ensure that the processor is working to its full potential:

    • that the processor is not idle, waiting for I/O
    • that I/O-bound processes do not wait for the processor when they only need to use it a little
    • that processor-bound processes do not block other processes.

  • The scheduler has a choice of strategy for deciding which job gets the use of the processor next:

    • shortest job first
    • round robin
    • shortest remaining time.

  • Jobs must be loaded into the computer’s main memory to use the processor.
  • Each job must be protected from the actions of other jobs.
  • Linker and loader software load and keep track of processes and their data.
  • A process and its data can be allocated to fixed-size pages or it can be logically segmented.
  • Virtual memory uses an area of backing store (disk) to extend the computer's main memory.
  • Virtual memory holds code and data relating to the current process that cannot fit in main memory.
  • Spooling is a way to ensure that the input and output for different jobs do not become mixed up, and allows several users (say, on a network) to produce output/printout at the same time.
  • When a file is sent to a busy printer, a reference to the file is added to the printer’s spool queue and it is processed when it reaches the top of the queue.
  • A typical desktop PC operating system includes:

    • file management to allow users to create a hierarchical structure for storing files and to copy, delete and move files
    • multi-tasking to allow the user to run several programs
    • a boot process to check the computer every time it is switched on and load the OS
    • a file allocation table (FAT) to point to the blocks on disk that are used by files.
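Of the scheduling strategies listed above, round robin is the simplest to sketch. The following Python simulation uses made-up process names and burst times for illustration:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts:  {process_name: total CPU time the process needs}
    quantum: the fixed time slice each process gets per turn
    Returns the order of (process, time_used) slices that run.
    """
    ready = deque(bursts.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        schedule.append((name, slice_))
        if remaining > quantum:                  # not finished yet:
            ready.append((name, remaining - quantum))  # back of the queue
    return schedule

print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))
# → [('A', 2), ('B', 2), ('C', 2), ('A', 2), ('C', 2), ('A', 1)]
```

Each process gets the processor for at most one quantum before rejoining the back of the ready queue, which is why no single processor-bound job can block the others indefinitely.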
