LTT Forum checks existence and password validity in milliseconds using built-in authentication methods.

S
samurzel
Junior Member
4
09-01-2023, 10:34 AM
#1
The database server runs on a fast processor with plenty of memory and a quick SSD, and it picks out the right user account from thousands of others almost instantly.

G
GoobieBubba
Member
183
09-01-2023, 03:14 PM
#2
Hello! It's great to be here with around 10,000 entries—definitely not too many for the computers to handle.

F
Freakiiianyx3
Senior Member
694
09-01-2023, 03:33 PM
#3
It could have been handled with a custom hash table, yes; that approach has worked well for other purposes too.

T
TheOrangeFTW
Member
199
09-03-2023, 02:59 PM
#4
It reminds me of handling around 20k records in a database back in the day; that was manageable with just a Pentium 3 and minimal processing power. Searching for a specific word across every forum post on a Pentium 3 about 25 years ago would likely have taken under ten seconds. Big-data searches have become much faster since then; most of the work now is in shaping the data to fit your needs. In 2008 I worked through Excel files with over 100k rows on a Pentium 4 with just 2GB of RAM. It still took about a minute, but it worked fine, and Excel isn't close to the most efficient tool for managing large datasets. In 2018, for a school project involving 15 million farming data entries, I used Python and JupyterLab. I performed around 70 data manipulations, and my laptop (i7-6700HQ, 32GB RAM) finished everything in about two minutes, without my optimizing the process at all.
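The word search described above is just a linear scan over every record. A minimal Python sketch, using synthetic posts as a stand-in for real forum data:

```python
# Synthetic stand-in for "search every forum post for a word": a plain
# linear scan, the same brute-force approach the anecdote describes.
posts = [f"post number {i} about databases" for i in range(20_000)]

def find_posts(all_posts, word):
    """Return the indices of posts whose text contains the word."""
    return [i for i, text in enumerate(all_posts) if word in text]

hits = find_posts(posts, "1999")  # every post whose number contains "1999"
```

Even without any indexing, scanning 20k short strings like this finishes in a few milliseconds on modern hardware, which is why the brute-force approach was already tolerable decades ago.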

W
WildCandy
Senior Member
675
09-12-2023, 12:22 AM
#5
Logging in involves two main steps: first, finding the matching username in the database, then verifying the password for that account. With 1 million accounts that may sound demanding, but several factors make it feasible. First, checking one username takes only a tiny amount of time per lookup. Second, contemporary processors run around 5 billion cycles per second, enough to process thousands of usernames in a blink. Third, stored as plain text at one byte per character, a million usernames of roughly 30 characters each take only about 30MB, which fits comfortably in memory or even cache. Fourth, if the usernames are sorted (say, alphabetically), you can halve the candidates at each step by inspecting the middle entry and discarding half the list; with that approach, about 20 checks (log2 of 1 million) are enough to find any user. Overall, the system running this forum is likely handling logins without much strain.
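That halving idea is ordinary binary search. A quick Python sketch with a made-up list of one million usernames (the names are illustrative, not from any real forum):

```python
# Sketch of the "inspect the middle entry" idea: plain binary search over a
# sorted list of one million made-up usernames, counting comparisons by hand.
usernames = [f"user{i:07d}" for i in range(1_000_000)]  # already sorted

def lookup(sorted_names, target):
    """Binary search; returns (found, number_of_middle_entries_inspected)."""
    lo, hi, checks = 0, len(sorted_names), 0
    while lo < hi:
        mid = (lo + hi) // 2
        checks += 1
        if sorted_names[mid] < target:
            lo = mid + 1
        elif sorted_names[mid] > target:
            hi = mid
        else:
            return True, checks
    return False, checks

found, checks = lookup(usernames, "user0123456")
# log2(1,000,000) ≈ 20, so checks stays right around 20
```

Each iteration throws away half of the remaining candidates, which is exactly why a million entries collapse to about twenty checks.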

A
AgentDiamond
Member
95
09-12-2023, 06:09 AM
#6
Various techniques enhance search speed and simplicity. It doesn’t have to check every user for a perfect match. Database optimization is important, but indexing, query tuning, and caching are key factors that significantly improve efficiency.

L
LucasandClaus
Senior Member
438
09-12-2023, 07:12 AM
#7
Databases are usually organized to boost speed, so you don’t have to scan through millions of usernames. It works like a phone directory—once you know a name starts with M, you jump to the section for Ms and start there instead of checking every letter. The CPU work isn’t costly; the real challenge is input/output, meaning reading data from storage. To improve this, databases rely heavily on caching, storing frequently accessed info in memory. When perfect conditions exist, the data is already in RAM, and a smart index reduces the search to a tiny space. Thus, matching a username should take only a few milliseconds. After that, you just hash the entered password and compare it with the stored hash for that username. If they match, you’re logged in; otherwise, not.
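The final hash-and-compare step can be sketched with Python's standard library. This is a hedged illustration: real forum software typically uses bcrypt or argon2, and the function names and iteration count here are my own choices, but the flow is the same.

```python
import hashlib, hmac, os

# Hedged sketch of the hash-and-compare step. Real forums normally use
# bcrypt or argon2; stdlib PBKDF2 stands in here, and the iteration
# count is an arbitrary illustrative choice.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored_digest):
    _, digest = hash_password(password, salt)          # re-hash the attempt
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, stored = hash_password("hunter2")  # what the user's DB row would hold
ok = verify("hunter2", salt, stored)     # True only for the right password
```

Note that the server never stores or recovers the plain password; it only re-hashes whatever the user typed and compares digests.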

G
Greytrem
Junior Member
41
09-12-2023, 08:21 AM
#8
It seems like your message was a bit unclear. Could you rephrase what you meant?

S
SkorpioElite
Junior Member
2
09-14-2023, 11:27 PM
#9
Yeah, I just wanted to add a bit more context. Most text files today use UTF-8 encoding, meaning characters can take anywhere from one to four bytes. It’s still practical to store them in memory if the database determines the data is accessed frequently enough. Tables are generally not sorted because that would slow down inserts and updates. Instead, indexes are used for speed—like an index made of usernames with pointers. Searching by username would involve checking the index first, then using the pointer to reach the actual record and verify the password. Or the index could store usernames along with hashed passwords, allowing a successful login to be confirmed without accessing the full table right away. Because the index holds far less data than the entire table, it’s much more efficient to keep it in memory.
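A toy version of that index-plus-pointer layout, with a made-up two-row table (all field names and values here are illustrative, not a real schema):

```python
# Toy version of the index idea: the "table" is a list of rows, and a small
# dict maps username -> row position, playing the role of the pointer index.
table = [
    {"username": "alice", "pw_hash": "h1", "email": "a@example.com"},
    {"username": "bob",   "pw_hash": "h2", "email": "b@example.com"},
]
index = {row["username"]: pos for pos, row in enumerate(table)}

def login_check(username, pw_hash):
    pos = index.get(username)     # check the small index first
    if pos is None:
        return False              # no such user
    return table[pos]["pw_hash"] == pw_hash  # follow the pointer, compare
```

The dict holds only usernames and positions, so it is far smaller than the full table; that is the same reason a real database index fits in memory even when the table does not.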