Boulder Computer Repair

Computer Physicians loves Boulder! We are glad to be your full-time computer company in Boulder, CO, and we have been in business since 1999. Our office is close to Boulder, and we service the Boulder area regularly. Call us for an appointment in Boulder, Colorado. We provide computer repair, upgrades, sales, installations, troubleshooting, networking, internet help, virus removal, and training.

Computer Networks in Longmont Denver Erie Colorado Computer Physicians

Networking is one of the jobs that Longmont Computer Physicians, LLC does to help its clients. Sometimes a client wants a wireless network; other times they want a wired computer network.

A few months ago I needed to hard-wire an entire house with CAT5e cabling for a client, for internet access and file sharing. It was a great success! Eight rooms in the house now have a network cable connection for computers.

Here are some pictures from the job, showing the patch cables and routers running into the house and through the walls.

Computer networking in Denver Boulder Colorado router and CAT 5e cable PC repair

Computer Networking in Boulder Longmont Denver Erie Colorado PC Repair

PC Computer Networking in Longmont, Boulder, Denver, Erie Colorado

Computer Repair Windows update in Longmont, Boulder, CO

Our Longmont Computer Physicians, LLC office computer had an interesting issue recently that I thought I would share:

After Windows 10 automatically installed an update on Valentine's Day, Feb 14, 2018 (KB4074588), the USB keyboard on my desktop computer would no longer work. I tried three different USB keyboards; none worked. So I went into Device Manager to uninstall, reinstall, and update the keyboard drivers. That did not work. I then uninstalled the Windows update, which fixed the problem, but the update would try to install itself again the next time I rebooted. So I set Windows Update (via System Properties) to never install hardware drivers during updates; from now on I will choose driver updates manually.
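For readers who prefer to script that change, here is a minimal sketch. It assumes the ExcludeWUDriversInQualityUpdate policy value (the registry equivalent of the "do not include drivers with Windows Updates" group policy) applies to your edition of Windows 10, and it must be run from an elevated (administrator) prompt; the System Properties dialog described above accomplishes the same thing.

```python
# Sketch: tell Windows Update not to bundle hardware drivers.
# Assumes the ExcludeWUDriversInQualityUpdate policy value is honored
# by your Windows edition; run from an elevated (administrator) prompt.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = do not include drivers with Windows quality updates
    winreg.SetValueEx(key, "ExcludeWUDriversInQualityUpdate", 0,
                      winreg.REG_DWORD, 1)

print("Driver updates excluded from Windows Update.")
```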

Computer Physicians provides PC computer networking, repair, data recovery, training, and virus removal in Longmont, Boulder, Denver, and Erie, Colorado, and along the Colorado Front Range.

Boulder/Longmont Computer Repair – PC with no hard drive used

A Longmont, Colorado PC not using its hard drive:

Computer Physicians, LLC just worked on an unusual situation in Longmont, CO: a Zotac mini PC whose Windows boot drive had filled up. I thought this would be good to share with my readers:

This very small Zotac mini PC, running Windows 10 Home with 4 GB of RAM, was booting from a 64 GB memory chip on the motherboard and was not using the 300 GB internal SATA hard drive. Because the Windows OS lived on that small 64 GB chip, the chip quickly filled to capacity. I backed up the customer's data to an external hard drive. The internal hard drive was not being used except to store a few small files. I could not clone the 64 GB memory chip, but I was able to transfer the OS to the internal drive using special disk software. I then went into the BIOS and set the boot drive to the internal drive. The computer runs slower now, since it is no longer using the small 64 GB chip for Windows, and the CPU and the computer itself are inexpensive and under-powered, designed to run from the 64 GB chip. The problem with this design is that the 64 GB chip quickly fills to capacity. (Windows 10 uses a lot of drive space; most systems have 1,000 GB or more.)
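A quick way to see how close a boot drive is to filling up is the standard-library shutil module. This is a small illustrative sketch; the drive letters are assumptions, so adjust them for your own machine.

```python
# Sketch: report total, used, and free space for a few drives.
# Drive letters below are examples; change them for your own machine.
import shutil

for drive in ("C:\\", "D:\\"):
    try:
        usage = shutil.disk_usage(drive)
    except OSError:
        print(f"{drive} not present")
        continue
    gb = 1024 ** 3
    print(f"{drive} total {usage.total / gb:.1f} GB, "
          f"used {usage.used / gb:.1f} GB, free {usage.free / gb:.1f} GB")
```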

I do not like this design and would not recommend this Zotac computer to a client.

The computer would run faster if the original drive were replaced with a solid-state drive, the OS transferred to it, and more RAM installed.

These are some of the situations that Computer Physicians, LLC runs into.

-Steve

Longmont’s Computer Physicians Computer Service and Repair in Longmont Colorado

Computer Physicians, LLC is a computer service company in Longmont, CO, that has been in business since 1999.

Longmont Computer Repair Data Recovery in Boulder Erie Denver Colorado Networking PC services help virus removal training

We provide computer repair and other services onsite at your location for same-day service, or in our workshop for the lowest cost in the area.

We also provide computer training, tutoring, help, upgrades, computer systems, rentals, sales, troubleshooting, performance improvement, cyber security, virus removal, networking, website development and hosting, internet setup, and router and switch installation, and we can use the 1 Gbps upload/download internet connection at our office for any fast-internet needs you have. We are experts at data recovery of lost data and PC system crash recovery. We also develop and publish the Song Director and NameBase database software.

Computer Physicians services the entire Colorado Front Range. Our main technician and president is a CompTIA A+, MCP, and MTA Microsoft certified professional with several college degrees in computing.

Call us today for any of your computer needs.

Longmont’s Newest Computer Viruses – Longmont/Boulder CO – Computer Physicians

Computer Repair Longmont, CO Virus removal. – Computer Physicians, LLC

Here is some news about the latest computer viruses out today that Computer Physicians in Longmont/Boulder, CO can help you with:

TechNewsWorld:

A new ransomware exploit dubbed "Petya" struck major companies and infrastructure sites in July 2017, following the previous month's WannaCry ransomware attack, which wreaked havoc on more than 300,000 computers across the globe. Petya is believed to be linked to the same set of hacking tools as WannaCry.

Petya already has taken thousands of computers hostage, impacting companies and installations ranging from Ukraine to the U.S. to India. It has impacted a Ukrainian international airport, and multinational shipping, legal and advertising firms. It has led to the shutdown of radiation monitoring systems at the Chernobyl nuclear facility.


Trends in PC technology – Computer Physicians Longmont/Boulder/Erie, CO

 https://www.computer-physicians.com/
Computer repair data recovery networking virus removal in Longmont/Boulder/Denver Colorado

Here is a good article that talks about the changes and trends in PC technology. An excerpt:

Past, Present and Future Trends in the Use of Computers in Fisheries Research, by Bernard A. Megrey and Erlend Moksness
1.2 Hardware Advances

It is difficult not to marvel at how quickly computer technology advances. The current typical desktop or laptop computer, compared to the original monochrome 8 KB random access memory (RAM), 4 MHz 8088 microcomputer or the original Apple II, has improved several orders of magnitude in many areas. The most notable of these hardware advances are processing capability, color graphics resolution and display technology, hard disk storage, and the amount of RAM. The most remarkable thing is that since 1982, the cost of a high-end microcomputer system has remained in the neighborhood of US$3,000. This statement was true in 1982, at the printing of the last edition of this book in 1996, and it holds true today.
1.2.1 CPUs and RAM

While we can recognize that computer technology changes quickly, this statement does not seem to adequately describe what sometimes seems to be the breakneck pace of improvements in the heart of any electronic computing engine, the central processing unit (CPU). The transistor, invented at Bell Labs in 1947, is the fundamental electronic component of the CPU chip. Higher performance CPUs require more logic circuitry, and this is reflected in steadily rising transistor densities. Simply put, the number of transistors in a CPU is a rough measure of its computational power, which is usually measured in floating point mathematical operations per second (FLOPS). The more transistors there are in the CPU, or silicon engine, the more work it can do.

Trends in transistor density over time reveal that density typically doubles approximately every year and a half, according to a well-known axiom known as Moore's Law. This proposition, suggested by Intel co-founder Gordon Moore (Moore 1965), was part observation and part marketing prophecy. In 1965 Moore, then director of R&D at Fairchild Semiconductor, the first large-scale producer of commercial integrated circuits, wrote an internal paper in which he drew a line through five points representing the number of components per integrated circuit for minimum cost for the components developed between 1959 and 1964. The prediction arising from this observation became a self-fulfilling prophecy that emerged as one of the driving principles of the semiconductor industry. As it relates to computer CPUs (one type of integrated circuit), Moore's Law states that the number of transistors packed into a CPU doubles every 18–24 months.
Figure 1.1 supports this claim. In 1979, the 8088 CPU had 29,000 transistors. In 1997, the Pentium II had 7.5 million transistors, in 2000 the Pentium 4 had 42 million, and the trend continues so that in 2007, the Dual-Core Itanium 2 processor has 1.7 billion transistors.

[Figure 1.1: trends in CPU transistor density and CPU performance over time; note the y-axis is on a log scale (Source: http://en.wikipedia.org/wiki/Teraflop, accessed 12 January 2008)]
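As a rough illustration of the doubling rule the authors describe, here is a small Python sketch. The 8088 starting point comes from the text, and the 18- and 24-month doubling periods bracket the range the article quotes; the projection is only an order-of-magnitude check, not a claim from the article.

```python
# Sketch: project transistor counts under Moore's Law-style doubling.
# Starting point (8088, 1979, 29,000 transistors) comes from the text.
def projected_transistors(start_count, start_year, year, months_per_doubling):
    doublings = (year - start_year) * 12 / months_per_doubling
    return start_count * 2 ** doublings

for months in (18, 24):
    est = projected_transistors(29_000, 1979, 2007, months)
    print(f"doubling every {months} months -> ~{est:,.0f} transistors in 2007")
```

The 1.7 billion transistors cited for the 2007 Itanium 2 falls between the two projections, which is about as close as such a back-of-the-envelope check can be expected to get.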
Manufacturing technology appears to be reaching its limits in terms of how dense silicon chips can be manufactured – in other words, how many transistors can fit onto CPU chips and how fast their internal clocks can be run. As stated recently in the BBC News, "The industry now believes that we are approaching the limits of what classical technology – classical being as refined over the last 40 years – can do." There is a problem with making microprocessor circuitry smaller. Power leaks, the unwanted leakage of electricity or electrons between circuits packed ever closer together, take place. Overheating becomes a problem as processor architecture gets ever smaller and clock speeds increase.
Traditional processors have one processing engine on a chip. One method used to increase performance through higher transistor densities, without increasing clock speed, is to put more than one CPU on a chip and to allow them to independently operate on different tasks (called threads). These advanced chips are called multiple-core processors. A dual-core processor squeezes two CPU engines onto a single chip. Quad-core processors have four engines. Multiple-core chips are all 64-bit, meaning that they can work through 64 bits of data per instruction. That is twice the rate of the current standard 32-bit processor. A dual-core processor theoretically doubles your computing power, since it can handle two threads of data simultaneously. The result is that there is less waiting for tasks to complete. A quad-core chip can handle four threads of data.
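To make the idea of spreading independent tasks across cores concrete, here is a minimal Python sketch using the standard multiprocessing module; the workload (summing squares over number ranges) is just an invented placeholder.

```python
# Sketch: divide independent chunks of work across CPU cores.
# The work function is a toy example; a real workload would go here.
from multiprocessing import Pool, cpu_count

def sum_of_squares(bounds):
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 2_000_000), (2_000_000, 4_000_000),
              (4_000_000, 6_000_000), (6_000_000, 8_000_000)]
    # One worker process per core (up to the number of chunks).
    with Pool(processes=min(cpu_count(), len(chunks))) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)
    print("cores available:", cpu_count())
    print("total:", sum(partial_sums))
```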
Progress marches on. Intel announced in February 2007 that it had a prototype CPU that contains 80 processor cores and is capable of 1 teraflop (10¹² floating point operations per second) of processing capacity. The potential uses of a desktop fingernail-sized 80-core chip with supercomputer-like performance will open unimaginable opportunities (Source: http://www.intel.com/pressroom/archive/releases/20070204comp.htm, accessed 12 January 2008).
As if multiple core CPUs were not powerful enough, new products being developed will feature "dynamically scalable" architecture, meaning that virtually every part of the processor – including cores, cache, threads, interfaces, and power – can be dynamically allocated based on performance, power and thermal requirements. Supercomputers may soon be the same size as a laptop if IBM brings silicon nanophotonics to market. In this new technology, wires on a chip are replaced with pulses of light on tiny optical fibers for quicker and more power-efficient data transfers between processor cores on a chip. This new technology is about 100 times faster, consumes one-tenth as much power, and generates less heat.
Multi-core processors pack a lot of power. There is just one problem: most software programs are lagging behind hardware improvements. To get the most out of a 64-bit processor, you need an operating system and application programs that support it. Unfortunately, as of the time of this writing, most software applications and operating systems are not written to take advantage of the power made available with multiple cores. Slowly this will change. Currently there are 64-bit versions of Linux, Solaris, Windows XP, and Vista. However, 64-bit versions of most device drivers are not available, so for today's uses, a 64-bit operating system can become frustrating due to a lack of available drivers.
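A quick way to check whether the Python interpreter you are running is a 64-bit build, and what architecture the operating system reports, is sketched below; it uses only the standard library.

```python
# Sketch: report whether this Python build and the machine are 64-bit.
import platform
import struct
import sys

print("pointer size:", struct.calcsize("P") * 8, "bits")
print("interpreter is 64-bit:", sys.maxsize > 2 ** 32)
print("machine architecture:", platform.machine())
print("OS:", platform.system(), platform.release())
```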
Another current developing trend is building high performance computing environments using computer clusters, which are groups of loosely coupled computers, typically connected through fast local area networks. A cluster works together so that multiple processors can be used as though they were a single computer. Clusters are usually deployed to improve performance over that provided by a single computer, while typically being much less expensive than single computers of comparable speed or availability.
Beowulf is a design for high-performance parallel computing clusters using inexpensive personal computer hardware. It was originally developed by NASA's Thomas Sterling and Donald Becker. The name comes from the main character in the Old English epic poem Beowulf.

A Beowulf cluster of workstations is a group of usually identical PC computers, configured into a multi-computer architecture, running an Open Source Unix-like operating system, such as BSD or Solaris. They are joined into a small network and have libraries and programs installed that allow processing to be shared among them. The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. Nodes are configured and controlled by the server node, and do only what they are told to do in a disk-less client configuration.
There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM). Both of these permit the programmer to divide a task among a group of networked computers and recollect the results of processing. Software must be revised to take advantage of the cluster. Specifically, it must be capable of performing multiple independent parallel operations that can be distributed among the available processors. Microsoft also distributes a Windows Compute Cluster Server 2003 (Source: http://www.microsoft.com/windowsserver2003/ccs/default.aspx, accessed 12 January 2008) to facilitate building a high-performance computing resource based on Microsoft's Windows platforms.
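As an illustration of the divide-and-collect pattern described above, here is a minimal MPI sketch in Python. It assumes the third-party mpi4py package and an MPI runtime are installed, and it would be launched with something like `mpirun -n 4 python sum_squares_mpi.py`; the script name and the toy workload are just examples.

```python
# Sketch: scatter chunks of work to MPI ranks and gather partial results.
# Requires mpi4py and an MPI implementation (launched via mpirun or similar).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # The root node splits a toy problem (summing squares of 0..999,999)
    # into one chunk per process.
    n = 1_000_000
    step = n // size
    chunks = [range(i * step, n if i == size - 1 else (i + 1) * step)
              for i in range(size)]
else:
    chunks = None

# Each process receives its own chunk, works on it, and sends back a result.
my_chunk = comm.scatter(chunks, root=0)
partial = sum(x * x for x in my_chunk)
partials = comm.gather(partial, root=0)

if rank == 0:
    print("total sum of squares:", sum(partials))
```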
One of the main differences between Beowulf and a cluster of workstations is that Beowulf behaves more like a single machine rather than many workstations. Nodes can be thought of as a CPU + memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard (Source: http://en.wikipedia.org/wiki/Beowulf_(computing), accessed 12 January 2008). Beowulf systems are now deployed worldwide, chiefly in support of scientific computing, and their use in fisheries applications is increasing. Typical configurations consist of multiple machines built on AMD's Opteron 64-bit and/or Athlon X2 64-bit processors.
Memory is the most readily accessible large-volume storage available to the CPU. We expect that standard RAM configurations will continue to increase as operating systems and application software become more full-featured and demanding of RAM. For example, the "recommended" configuration for Windows Vista Home Premium Edition and Apple's new Leopard operating systems is 2 GB of RAM: 1 GB to hold the operating system, leaving 1 GB for data and application code. In the previous edition, we predicted that in 3–5 years (1999–2001) 64–256 megabytes (MB) of dynamic RAM would be available and machines with 64 MB of RAM would be typical. This prediction was incredibly inaccurate. Over the years, advances in semiconductor fabrication technology have made gigabyte memory configurations not only a reality, but commonplace.
Not all RAM performs equally. Newer types, called double data rate RAM (DDR), decrease the time it takes for the CPU to communicate with memory, thus speeding up computer execution. DDR comes in several flavors. DDR has been around since 2000 and is sometimes called DDR1. DDR2 was introduced in 2003. It took a while for DDR2 to reach widespread use, but you can find it in most new computers today. DDR3 began appearing in mid-2007. RAM simply holds data for the processor. However, there is a cache between the processor and the RAM: the L2 cache. The processor sends data to this cache. When the cache overflows, data are sent to the RAM. The RAM sends data back to the L2 cache when the processor needs it. DDR RAM transfers data twice per clock cycle. The clock rate, measured in cycles per second, or hertz, is the rate at which operations are performed. DDR clock speeds range between 200 MHz (DDR-200) and 400 MHz (DDR-400). DDR-200 transfers 1,600 megabytes per second (MB s⁻¹) while DDR-400 transfers 3,200 MB s⁻¹.

DDR2 RAM is twice as fast as DDR RAM. The bus carrying data to DDR2 memory is twice as fast. That means twice as much data are carried to the module for each clock cycle. DDR2 RAM also consumes less power than DDR RAM. DDR2 speeds range between 400 MHz (DDR2-400) and 800 MHz (DDR2-800). DDR2-400 transfers 3,200 MB s⁻¹; DDR2-800 transfers 6,400 MB s⁻¹. DDR3 RAM is twice as fast as DDR2 RAM, at least in theory. DDR3 RAM is more power-efficient than DDR2 RAM. DDR3 speeds range between 800 MHz (DDR3-800) and 1,600 MHz (DDR3-1600). DDR3-800 transfers 6,400 MB s⁻¹; DDR3-1600 transfers 12,800 MB s⁻¹.
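The peak-transfer figures above follow from a simple calculation, assuming the standard 64-bit (8-byte) DDR memory bus: effective transfers per second times eight bytes per transfer. A small Python check, with the module names mapped to the speed grades quoted in the article:

```python
# Sketch: peak bandwidth = effective transfer rate (MT/s) x 8 bytes per transfer.
BUS_WIDTH_BYTES = 8  # 64-bit memory bus

grades = {"DDR-200": 200, "DDR-400": 400,
          "DDR2-400": 400, "DDR2-800": 800,
          "DDR3-800": 800, "DDR3-1600": 1600}

for name, mega_transfers in grades.items():
    print(f"{name}: {mega_transfers * BUS_WIDTH_BYTES:,} MB/s peak")
```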
As processors increased in performance, the addressable memory space also increased as the chips evolved from 8-bit to 64-bit. Bytes of data readily accessible to the processor are identified by a memory address, which by convention starts at zero and ranges to the upper limit addressable by the processor. A 32-bit processor typically uses memory addresses that are 32 bits wide. The 32-bit wide address allows the processor to address 2³² bytes (B) of memory, which is exactly 4,294,967,296 B, or 4 GB. Desktop machines with a gigabyte of memory are common, and boxes configured with 4 GB of physical memory are easily available. While 4 GB may seem like a lot of memory, many scientific databases have indices that are larger. A 64-bit wide address theoretically allows 18 million terabytes of addressable memory (1.8 × 10¹⁹ B). Realistically, 64-bit systems will typically access approximately 64 GB of memory in the next 5 years.
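The 4 GB and 18-million-terabyte figures are straightforward powers of two; a quick Python check:

```python
# Sketch: addressable memory for 32-bit and 64-bit address widths.
addr_32 = 2 ** 32   # bytes addressable with a 32-bit address
addr_64 = 2 ** 64   # bytes addressable with a 64-bit address

print(f"32-bit: {addr_32:,} bytes = {addr_32 / 1024**3:.0f} GB")
# The article quotes decimal terabytes (10^12 bytes) for the 64-bit figure.
print(f"64-bit: {addr_64:.3e} bytes = about "
      f"{addr_64 / 10**12 / 10**6:.0f} million TB")
```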
1.2.2 Hard Disks and Other Storage Media

Improvements in hard disk storage since our last edition have advanced as well. One of the most amazing things about hard disks is that they both change and don't change more than most other components. The basic design of today's hard disks is not very different from the original 5¼-inch 10 MB hard disk that was installed in the first IBM PC/XTs in the early 1980s. However, in terms of capacity, storage, reliability and other characteristics, hard drives have substantially improved, perhaps more than any other PC component behind the CPU. Seagate, a major hard drive manufacturer, estimates that drive capacity increases by roughly 60% per year (Source: http://news.zdnet.co.uk/communications/0,100,0000085,2067661,00.htm, accessed 12 January 2008).
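Compounding that 60% annual growth rate shows how quickly capacities escalate. Here is a small Python sketch using the article's 10 MB (1981) starting point; the sample years are just examples.

```python
# Sketch: compound a 60% annual capacity growth rate from a 10 MB drive in 1981.
start_mb, start_year, growth = 10, 1981, 0.60

for year in (1990, 2000, 2008):
    capacity_mb = start_mb * (1 + growth) ** (year - start_year)
    print(f"{year}: ~{capacity_mb / 1024:,.0f} GB" if capacity_mb >= 1024
          else f"{year}: ~{capacity_mb:,.0f} MB")
```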
Some of the trends in various important hard disk characteristics (Source: http://www.PCGuide.com, accessed 12 January 2008) are described below. The areal density of data on hard disk platters continues to increase at an amazing rate, even exceeding some of the optimistic predictions of a few years ago. Densities are now approaching 100 Gbits in⁻², and modern disks are now packing as much as 75 GB of data onto a single 3.5 in platter (Source: http://www.fujitsu.com/downloads/MAG/vol42-1/paper08.pdf, accessed 12 January 2008). Hard disk capacity continues to not only increase, but increase at an accelerating rate. The rate of technology development, measured as data areal density growth, is about twice that of Moore's law for semiconductor transistor density (Source: http://www.tomcoughlin.com/Techpapers/head&medium.pdf, accessed 12 January 2008).
The trend towards larger and larger capacity drives will continue for both desktops and laptops. We have progressed from 10 MB in 1981 to well over 10 GB in 2000. Multiple terabyte (1,000 GB) drives are already available. Today the standard for most off-the-shelf laptops is around 120–160 GB. There is also a move to faster and faster spindle speeds. Since increasing the spindle speed improves both random-access and sequential performance, this is likely to continue. Once the domain of high-end SCSI (Small Computer System Interface) drives, 7,200 RPM spindles are now standard on mainstream desktop and notebook hard drives, and 10,000 and 15,000 RPM models are beginning to appear.

The trend in size or form factor is downward: to smaller and smaller drives. 5.25 in drives have now all but disappeared from the mainstream PC market, with 3.5 in drives dominating the desktop and server segment. In the mobile world, 2.5 in drives are the standard, with smaller sizes becoming more prevalent. In 1999 IBM announced its Microdrive, a tiny 1 GB device only an inch in diameter and less than 0.25 in thick. It can hold the equivalent of 700 floppy disks in a package as small as 24.2 mm in diameter. Desktop and server drives have transitioned to the 2.5 in form factor as well, where they are used widely in network devices such as storage hubs and routers, blade servers, small form factor network servers and RAID (Redundant Arrays of Inexpensive Disks) subsystems. Small 2.5 in form factor (i.e. "portable") high performance hard disks, with capacities around 250 GB and using the USB 2.0 interface, are becoming common and easily affordable. The primary reasons for this "shrinking trend" include the enhanced rigidity of smaller platters, reduction in platter mass enabling faster spin speeds, and improved reliability due to enhanced ease of manufacturing.

Both positioning and transfer performance factors are improving. The speed with which data can be pulled from the disk is increasing more rapidly than positioning performance is improving, suggesting that over the next few years seek time and latency will be the areas of greatest attention for hard disk engineers. The reliability of hard disks is improving slowly as manufacturers refine their processes and add new reliability-enhancing features, but this characteristic is not changing nearly as rapidly as the others above. One reason is that the technology is constantly changing and the performance envelope is constantly being pushed; it's much harder to improve the reliability of a product when it is changing rapidly.
Once the province of high-end servers, the use of multiple disk arrays (RAIDs) to improve performance and reliability is becoming increasingly common, and multiple hard disks configured as an array are now frequently seen in consumer desktop machines. Finally, the interface used to deliver data from a hard disk has improved as well. Despite the introduction to the PC world of new interfaces such as IEEE-1394 (FireWire) and USB (universal serial bus), the mainstream interfaces in the PC world are the same as they were through the 1990s: IDE/ATA/SATA and SCSI. These interfaces are all going through improvements. A new external SATA interface (eSATA) is capable of transfer rates of 1.5–3.0 Gbits s⁻¹. USB transfers data at 480 Mbits s⁻¹, and FireWire is available in 400 and 800 Mbits s⁻¹ versions. USB 3.0 has been announced, and it will offer speeds up to 4.8 Gbits s⁻¹. FireWire will also improve to speeds in the range of 3.2 Gbits s⁻¹. The interfaces will continue to create new and improved standards with higher data transfer rates to match the increase in performance of the hard disks themselves.
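To put those interface speeds in perspective, here is a small Python sketch that converts the quoted rates to megabytes per second and estimates the time to move a 100 GB disk image; the file size is just an example, and the figures are theoretical peaks that ignore protocol overhead.

```python
# Sketch: theoretical transfer time for a 100 GB file at various interface speeds.
# Rates are the peak figures quoted above, in megabits per second.
rates_mbit_s = {"USB 2.0": 480, "FireWire 800": 800,
                "eSATA (3.0 Gbit/s)": 3000, "USB 3.0": 4800}

file_gb = 100
file_megabits = file_gb * 1024 * 8  # GB -> MB -> megabits

for name, rate in rates_mbit_s.items():
    seconds = file_megabits / rate
    print(f"{name}: {rate / 8:,.0f} MB/s peak, "
          f"~{seconds / 60:.1f} minutes for {file_gb} GB")
```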
In summary, since 1996, faster spindle speeds, smaller form factors, multiple double-sided platters coated with higher density magnetic coatings, and improved recording and data interface technologies have substantially increased hard disk storage and performance. At the same time, the price per unit of storage has decreased.

Longmont Boulder Computer Repair Data Recovery -Video

Longmont Boulder Computer Repair Data Recovery PC service Virus removal.

https://www.computer-physicians.com/ in Longmont, Boulder, Erie, Denver, Colorado. Onsite at your location – we come to you! Onsite, in-shop or remote help.  Video about Computer Physicians:

 

Longmont Boulder Computer Repair PC service Virus removal, Data Recovery https://www.computer-physicians.com/ in Longmont, Boulder, Erie, Denver, Colorado.  Onsite at your location – we come to you! Onsite, in-shop or remote help.
