I would vote for this (if I had any votes left).
Of course it does!
Since the dawn of the JET database engine (written by Microsoft as part of the Windows OS), JET (and later ACE) database files have always transacted database reads and updates by reading and writing WHOLE hard-drive disk pages at a time. NAS storage methods are great, except where they make that impossible, simulating that behavior instead by sending the contents of whole disk pages across the network as a stream of TCP/IP (or other network protocol) packets.
This does work, but it can NEVER be as efficient as the original design of reading and writing disk pages on a local hard drive. Ergo, it is slower when the data files are on the network.
Sure it is...when importing a text file you can already set a custom delimiter character - just set it to a ; and you're all set.
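As a sketch of what that looks like in automation (the spec name, table name, and path here are hypothetical - the delimiter itself is set to ";" in the Import Wizard's Advanced dialog and saved as an import specification):

```vba
' Sketch: import a semicolon-delimited text file using a saved import
' specification. "SemicolonSpec" is a hypothetical spec name you would
' create once via the Import Wizard's Advanced > Save As... dialog.
DoCmd.TransferText acImportDelim, "SemicolonSpec", _
    "tblImported", "C:\data\input.txt", True  ' True = first row has field names
```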
Why would this be necessary when you can already achieve that with a SQL Statement executed within the macro?
UPDATE MyTable SET MyField = (whatever you want here) WHERE MyTable.MyKeyField(s) = (whatever the key value ID(s) is/are here);
...or am I not understanding your intent properly here?
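For what it's worth, the same statement can also be fired from VBA rather than a macro action - a minimal sketch, with MyTable/MyField/MyKeyField as placeholder names:

```vba
' Hypothetical example: run the UPDATE directly, no saved query object needed.
' Table and field names are placeholders.
Public Sub RunMyUpdate(ByVal NewValue As String, ByVal KeyID As Long)
    Dim sql As String
    sql = "UPDATE MyTable SET MyField = '" & Replace(NewValue, "'", "''") & "'" & _
          " WHERE MyKeyField = " & KeyID & ";"
    CurrentDb.Execute sql, dbFailOnError   ' dbFailOnError surfaces any engine errors
End Sub
```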
My experience with this has been that "background copy" apps either grab an exclusive lock on the database file before they can copy it, or they fail and never start the copy (existing file-sharing mechanics simultaneously prevent the database file from being opened, or prevent "saving changes" to form/report/code/macro objects). If you have a background/shadow-copy app or service that does not behave this way (or is misconfigured), then that is a flaw of the app and its developers or administrators in your enterprise - not the fault of MS-Access or the ACE engine.
We need better database file formats, pure and simple. Whether it is because we really need:
1) Bigger FILE-BASED databases. Many enterprise users are completely restricted from installing even free local SQL Server instances, so "go with a bigger, better server-type database for your back-end" is a complete non-starter for departmental-level users who may have small(-ish) or transient needs for larger data files - short of going through the expensive, time-consuming headache of requesting actual server instances and sufficient space from their IT bureaucracy (never mind the whole budget-cycle conversation, too). So, there is still a need here!
2) My other previous requests for better security - enforced at the database engine level - including IT/data governance issues like: "do you have permission to create a new database?", "do you have permission to import or export this data?", "are you allowed to copy/paste to this table/field?"
So, with reasons like those (and many other fairly good reasons to improve the database files/formats), adding in a request like mine - allowing us to perform partial (table-level) database maintenance/space-reclamation/index-rebuilding - is a worthwhile addition to the feature list.
Should the Access/ACE Engine team decide to radically remake the database file format for any of the above reasons, we will of course have to go through the growing pains of getting the new database engine and data files to a fully debugged, stable point. We can all expect the obvious issues and pain points associated with that. However, you forget that we have had issues with database files and formats in recent times as well (especially backwards-compatibility issues), so there really is no escaping this kind of problem anyway. So your objection to this request on the grounds of avoiding those issues is really rather pointless, isn't it?
You can fix this yourself by following documented Best Practices.
1) Open the database using the /decompile command-line switch.
2) Compact & Repair the database.
3) Re-open the database and re-compile the code manually.
4) Compact & Repair the database again.
Voila! All phantom breakpoint stops are gone!
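For step 1, the switch is passed on the Access command line. A sketch of launching it from VBA (both paths are illustrative and will differ by Office version and install location):

```vba
' Sketch: launch a second Access instance with the /decompile switch.
' Both paths below are illustrative placeholders for your environment.
Shell """C:\Program Files\Microsoft Office\root\Office16\MSACCESS.EXE"" " & _
      """C:\data\MyApp.accdb"" /decompile", vbNormalFocus
```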
Allow me to give you another example from very current personal experience:
GIVEN: You have an "expensive" ODBC pass-through query against a corporate data store that can easily run 20-30+ MINUTES _*each and every time*_ you run it (hundreds of millions of data rows, poorly indexed for your query's purposes, as is often the case in "management reporting environment"-type corporate database scenarios).
Now, your task is to execute this query and report out the results. Simple, right?
So you create a new query, and select qryVeryExpensive from the table-selection dialog under the Queries tab to paste it into the query designer...
(wait that 20-30+ minutes here for the query to run - JUST so the query designer can read the resulting recordset's schema for the column/field names and data types...)
complete that process (eventually).
Then, select the "Make Table" option to change the query type and enter the target table name. oops - you misspelled it. Click on the SQL view to fix the target table name, and then click back to the designer view...
(NOW YOU AGAIN MUST wait that 20-30+ minutes here for the query to run - JUST so the query designer can read the resulting recordset's schema for the column/field names and data types...)
get the picture here?
There are SO MANY paths to make little mistakes and mis-steps in Access like the above example that can cost you TONS of TIME - at the very least - for no real net gain!
When dealing with ODBC pass-through queries, ALL of the query/form/report designers MUST have a means of caching enough of the FIRST execution's results that subsequent "automatic" re-executions of the query DO NOT NEED TO HAPPEN!
There needs to be an option to override the cached results and force it to run, if the circumstances dictate - UNDER THE USERS' CONTROL.
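Pending a real caching feature, the manual workaround is to materialize the expensive result ONCE into a local table and design everything against the copy - a sketch, with qryVeryExpensive and tblCachedResults as the hypothetical names from the example above:

```vba
' Sketch of a manual workaround: run the expensive pass-through query ONE time
' into a local table, then point all design work at the local copy.
' Query/table names are the hypothetical ones from the scenario above.
Public Sub CacheExpensiveQuery()
    On Error Resume Next
    DoCmd.DeleteObject acTable, "tblCachedResults"   ' drop any stale cached copy
    On Error GoTo 0
    ' The one (and only) 20-30+ minute wait happens here:
    CurrentDb.Execute _
        "SELECT * INTO tblCachedResults FROM qryVeryExpensive;", dbFailOnError
End Sub
```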
@AlanC Yes, of course I have, but in some cases - say, when outside forces change the underlying technical conditions under which the application runs and/or was first developed, like Admins moving a SQL Server to the far side of a WAN link post-deployment - there are serious impacts on being able to edit existing objects. It is for occasions like these that I make this request (it can literally mean the difference between being able to maintain an application and not).
But Ben, then they'd have to fully debug, and TEST those methods too!
...and then listen to the "feature requests" to be made about them (looks at Nick67's post) Oh, oops too late...
The actual reasons for the 2GB file-size limit on databases have little or nothing to do with physical memory addressing or with segregating data and code spaces in memory. Instead, it has everything to do with the limits of the ORIGINAL MS-DOS/Windows FAT filesystems (originally a 2GB maximum size for a HDD volume). The JET engine's internal architecture (Blue and Red versions - both of which were "property" of the Windows OS development group internally at MS, by the way) was mapped to the same limits. It was simply never re-architected to follow the larger limits of the next-gen FAT or NTFS filesystems. Ergo, the 2GB limit persists with us still today due to the 2GB pointer-space limits of the old FAT filesystem; much like an insect stuck in a too-small exoskeleton from its earlier days.
Beyond this basic factoid, the reason it was never addressed was a business decision made long ago by senior MS managers to prioritize SQL Server as the database platform of the future and "freeze" the capabilities of JET where they were. With the ACE engine, the MS-Access dev team COULD now, finally, decide to do something about this issue, but they likely still wear the same ankle chains they were forced into all those years ago.
So far, nobody has been able to convince the more senior-level managers above them that there are new realities and dynamics in the marketplace today, and that local file-based data stores larger than 2GB are increasingly needed - WITHOUT incurring the administrative overhead of an MS SQL Server installation (especially a problem for enterprise customers whose Domain Admins and policy settings seriously lock down software installation rights).
Three cheers to the brave MS-Dev-Team soul who is able to finally take that debate with more-senior-level folks and finally win it for us all!
Please provide more information regarding the strange problems you are experiencing.
Let me try and elaborate, then.
Given a .accdb with, say, 7 references.
These references occur in a specific sequence (likely the sequence in which they appear in the references dialog, as experience seems to show).
When MS-Access tries to resolve the libraries on open, and a reference fails to resolve because of an invalid COM CLSID or a missing .dll, that library is marked as broken, and attempts to load the remaining libraries proceed with similar logic. This is the _first_ source of the problem, but I'll come back to that.
The real problem occurs when a CLSID lookup for _any_ referenced library _after_ the first broken one is attempted (and, depending upon the sequence of the libraries, even "internal" VBA functions like LEFT$() can fall victim to this). Once the search for a CLSID starts, if it is still fruitless when the first broken reference is reached, the search is abandoned and an error message is thrown - even if that CLSID would have been found valid had the search algorithm been able to "skip over" the invalid reference library.
The easiest solution would seem to be to "punt" those bad reference libraries to the "end of the list" as they are initially loaded, so that all the valid references could be registered FIRST, and only then have the bad references "failed". In doing so, the original CLSID de-reference logic would work better, and only those broken references themselves would fail in use (thereby allowing "built-in VBA libraries" to function regardless of reference libraries' specified load sequence, and ending the infamous errors on Left$() being not found).
It is really the same problem for ALL broken references: any break in the COM references list for early binding can cause code references in any subsequent library (even internal VBA functions like LEFT$() and RIGHT$()) to fail because some OTHER component is missing or has a significant registry issue. The COM CLSID/name lookup simply quits looking, with no "skip over the one with a problem and keep looking in the other libraries" style of error handling.
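Broken references can at least be detected from code via the documented Access.References collection - a minimal diagnostic sketch:

```vba
' Sketch: enumerate the project's COM references and report any broken ones,
' using the documented Access.References collection and Reference.IsBroken.
Public Sub ListBrokenReferences()
    Dim ref As Access.Reference
    For Each ref In Application.References
        If ref.IsBroken Then
            ' Note: .Name can itself raise an error on some broken references,
            ' so report the path instead.
            Debug.Print "BROKEN: " & ref.FullPath
        Else
            Debug.Print "OK:     " & ref.Name & " (" & ref.FullPath & ")"
        End If
    Next ref
End Sub
```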
You are missing the concept.
You appear to understand only a part of the automation picture.
Someone ("User A" here) starts a secured MS-Access application under their credentials on a PC, logging it in under their permissions and rights (including their access to the back-end data tables).
Someone else - in a variety of possible real-world scenarios - comes along and uses COM API calls to grab hold of the RUNNING instance of MS-Access in memory (think: a hostile VBScript virus/malware). The script then uses the MS-Access COM API to do whatever its creator ("User B" here) wants it to do. When it runs, it runs under the cover of "User A"'s login credentials and security rights.
This is entirely possible, and there is no real defense against it. There is no built-in/"standard" means of blocking or otherwise defeating the "hostile" code in this scenario, and it can do whatever damage to the back-end tables that the legitimate user has permissions to do.
This is the reason for this request.
@Anders is exactly correct. This security hole is ripe for unlimited abuse. Get someone to start and log in to your application. I can then grab the handle via vbscript and read EVERYTHING your app can read from the FE and BE databases it uses - UNLESS you program the app VERY defensively - which is HIGHLY uncommon. @Nick67...even your SQL Server BE is completely vulnerable to this. Whatever the App has permissions for and access to at any given time, can be exploited through the COM Automation security hole. Worse, malicious code could damage as well as read your SQL database, assuming your app has those permissions.
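To make the scale of the exposure concrete, the grab itself is only a couple of lines - a sketch (VBScript-compatible syntax; the table name is hypothetical):

```vba
' Sketch (runnable as a .vbs script): attach to an ALREADY RUNNING,
' already logged-in Access instance and read through its credentials.
Dim app, rs
Set app = GetObject(, "Access.Application")   ' grabs the running instance
Set rs = app.CurrentDb.OpenRecordset("SELECT * FROM tblSensitive")  ' hypothetical table
' ...from here the script can read (or modify) whatever "User A" can.
```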
That exception is the generic one, but one source for it is network/Active Directory policy settings that restrict scripting functionality from running. Unfortunately, in those circumstances the generic error is still the one thrown (making diagnosis of the issue unnecessarily difficult).
If they do make Access (/DAO?) a "better ODBC client", I would hope that they also find a way to re-create the capabilities of ADODB disconnected (in-memory only) recordsets - including chaptered/hierarchical recordsets. I know - "why re-invent the ADODB wheel?", right? Well, because they're basically throwing the old ADODB wheel away, and those libraries, as useful as they are/were, are simply going the way of the Dodo over time. So, if increased ODBC capabilities are to be considered, then this set of feature requirements deserves a seat at the negotiating table as well.
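For anyone unfamiliar with the capability at stake, this is the classic fabricated, fully disconnected ADODB recordset - a sketch (requires a reference to a "Microsoft ActiveX Data Objects 2.x Library"; field names are illustrative):

```vba
' Sketch: a fabricated, fully disconnected ADODB recordset - no connection,
' no data source - the capability this request asks to preserve.
Public Function MakeDisconnectedRS() As ADODB.Recordset
    Dim rs As New ADODB.Recordset
    rs.CursorLocation = adUseClient          ' client-side cursor = disconnected-capable
    rs.Fields.Append "ID", adInteger         ' define the schema by hand
    rs.Fields.Append "Name", adVarChar, 50
    rs.Open                                  ' open with no connection at all
    rs.AddNew Array("ID", "Name"), Array(1, "Example")
    rs.Update
    Set MakeDisconnectedRS = rs
End Function
```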
Also - though authoring .ADPs is no longer supported, apparently running them still is. However, this becomes increasingly problematic as the SQL Server side moves away from both the old MSDASQL driver and the newer SQL Native Client drivers in favor of the newer-generation ODBC drivers. I just ran into a situation where an Access 2010 .ADP needed to be migrated from its SQL 2005 back-end, and we attempted to simply attach the .ADP to a newer SQL 2014 server as a direct replacement. The results were less than spectacular: it went from about 5-10 seconds to open and start the application to 45+ seconds simply to OPEN the database (even longer to auto-start the application). The problem was NOT with VBA code that directly opened ADODB connections to the SQL Server and executed data-handling tasks. The primary issue arose when using the built-in MS-Access 2010 SERVER options to relink the source database from the old SQL 2005 server to the new SQL 2014 server. This is a grossly insufficient level of performance for the newer server technology versus the older!
I have not experimented - and this is just a thought - but if the compiled database had an un-compiled add-in database, could the (compiled) code do the dynamic forms build/load work from within that database container? And could the add-in database successfully use the database objects in the compiled db (or another add-in) as bindable data sources?
This was suggested in a previous topic - I suggested the best compromise was to modify the Now() function to take an optional parameter for timezone (to include UTC/Zulu as an option).
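In the meantime, a UTC "Zulu" clock can be faked with a helper - a sketch using the Win32 GetSystemTime API (VBA7/64-bit declare form shown; NowUtc is a hypothetical helper name, not a built-in):

```vba
' Sketch of the kind of helper the request implies: a Now() variant
' returning UTC ("Zulu") time via the Win32 GetSystemTime API.
Private Type SYSTEMTIME
    wYear As Integer
    wMonth As Integer
    wDayOfWeek As Integer
    wDay As Integer
    wHour As Integer
    wMinute As Integer
    wSecond As Integer
    wMilliseconds As Integer
End Type

Private Declare PtrSafe Sub GetSystemTime Lib "kernel32" (lpSystemTime As SYSTEMTIME)

Public Function NowUtc() As Date
    Dim st As SYSTEMTIME
    GetSystemTime st
    NowUtc = DateSerial(st.wYear, st.wMonth, st.wDay) + _
             TimeSerial(st.wHour, st.wMinute, st.wSecond)
End Function
```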