A WordPress theme installation error usually occurs for one of the following reasons:
Incorrect file format: upload a .zip theme file, not an extracted folder. WordPress only accepts zipped theme files.
File size limit: some hosting providers limit the maximum upload size. If the theme is large, increase the upload size limit or install via FTP.
Missing style.css file: every WordPress theme must contain a style.css file in its root folder. If it's missing, the theme will fail to install.
Theme conflicts or duplicates: if the same theme or version already exists, WordPress may block the installation. Try deleting the old version first.
Server configuration issues: sometimes the PHP version, memory limits, or permissions cause the error. Updating PHP or adjusting permissions can help.
Uploading to the wrong section: make sure you upload themes under Appearance → Themes → Add New, not under Plugins.
Don't know about Java ( because I have not coded in Java since 1998 ).
However, using an Intel i7-1355U (mobile i7 CPU, 1.7 GHz base speed) with 16 GB of memory, I was able to sort 200 million DWORDs (4-byte unsigned integers) in 20.563 secs using Shellsort (normally slower than Quicksort). The code was compiled with Visual Studio's C++ compiler.
So your implementation is definitely flawed (unoptimised somewhere).
Microsoft would not allow me to sort 300 million records, so I cannot tell you anything beyond that limitation. Sometimes I could sort 250 million records.
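For reference, Shellsort itself is only a few lines. Here is a minimal Python sketch of the algorithm (the textbook n/2 gap sequence, not the benchmarked C++ code above):

```python
def shellsort(a):
    """In-place Shellsort using the simple n/2 gap sequence."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        # Gapped insertion sort: each pass sorts elements gap apart.
        for i in range(gap, n):
            tmp = a[i]
            j = i
            # Shift larger gap-neighbours right until tmp fits.
            while j >= gap and a[j - gap] > tmp:
                a[j] = a[j - gap]
                j -= gap
            a[j] = tmp
        gap //= 2
    return a

print(shellsort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```

Fancier gap sequences (Ciura, Tokuda) perform noticeably better on large arrays, which may partly explain the timing differences discussed here.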
Yeah, just ran into this:
matplotlib 3.10.7
mplleaflet 0.0.5
I encountered the same problem even though I tried Xcode 26, Xcode 26.0.1, and Xcode 26.1 (https://developer.apple.com/forums/thread/805547).
After two days of trying, I fixed the problem by following the method in this article (https://medium.com/@sergey-pekar/how-to-fix-annoying-xcode-26-not-building-swiftui-previews-error-cannot-execute-tool-metal-due-49564e20357c).
I think there is still a bug in the Metal Toolchain installation, at least for those who migrated from the Xcode 26 beta.
Here are some approaches you can use to work around attaching and debugging:
Revit AddinManager lets developers update .NET assemblies without restarting Revit.
Attaching to a process is fine in case you have Revit open and need to test the process for a while, but it's not highly recommended for this case.
When you use a connection name with non-ASCII characters, the XML file can end up corrupted. In this case, you say you used a Chinese name for the database connection. Rename the connections with simple names: avoid non-ASCII characters and accented names. If you use Windows, you can edit the Spoon.bat file and, before Java is executed, add "set JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8", then save and restart Spoon. This forces all XML files to be read/written with UTF-8 encoding. I hope this works for you, good luck...
Are you using the "Activity" component in your code? When I rendered two Activity components, that error came up in the console.
It just records the cell or sheet context where the named item was originally created, and it does not affect what the name actually refers to. You can safely ignore it when editing the name or value.
if anyone is having the same problem in VS2022, go to the Docker Desktop app → Settings → Resources → WSL Integration. I noticed that I didn’t have any distro installed, so I installed Ubuntu using `wsl --install Ubuntu` and it worked!
As ShaolinShane said, I set a different subnet for the ACA under the same VNet as the ACR's private link.
It works now.
I did it like that. I think it's better and simpler too; the solution proposed here didn't work in Godot 4.3:
foreach (var child in GetChildren())
{
if (child is IState state) _states.Add(state);
}
A better solution:
add_action('woocommerce_single_product_summary', function() {
global $product;
if ( ! $product->is_in_stock() ) {
echo '<a href="' . home_url( 'contact' ) . '">Contact us</a>';
}
}, 31);
The greyed-out JAR files issue in STS/Eclipse Data Source Explorer is a common problem.
Try installing the proper database tools:
Go to Help → Eclipse Marketplace
Search for "DTP" (Data Tools Platform) or "Database Development"
Install the Eclipse Data Tools Platform plugins
Restart STS
Try adding drivers again via Data Source Explorer
Newer versions of STS might have issues with the legacy Database Development perspective, so try the steps above and let us know whether it works.
Use Process.Start with the name of the UWP app, ending in a colon, as the parameter:
Process.Start("appname:");
Process.Start("ms-calculator:");
`ST_Contains`/`contains` is slow when used row-by-row without a spatial index. Use geopandas or PostGIS, and make sure both layers share the same CRS/SRID:
import geopandas as gpd
points = points.to_crs(polygons.crs)
result = gpd.sjoin(points, polygons, how="left", predicate="within")
The problem is that you are using the google.generativeai library, which is the SDK for Google AI Studio and is designed to use a simple API key. If you're using ADC, then you need to use the Vertex AI SDK, google-genai. That library is designed to automatically and securely use your environment's built-in service account.
I was able to get the following code to work:
pip install google-genai python-dotenv
import os
from dotenv import load_dotenv
from google.genai import Client
load_dotenv()
project_id = os.environ.get("PROJECT_ID")
location = os.environ.get("LOCATION")
print(f"[info] project id = {project_id}, location = {location}")
client = Client(
vertexai=True, project=project_id, location=location
)
model_name = "gemini-2.5-flash"
response = client.models.generate_content(
model=model_name,
contents="Hello there.",
)
print(response.text)
client.close()
Closing all Visual Studio instances and deleting the %LocalAppData%\.IdentityService\SessionTokens.json file worked for me.
https://developercommunity.visualstudio.com/t/Nuget-Package-Manager-for-Project---Con/10962291
What worked for me is:
$env:REACT_NATIVE_PACKAGER_HOSTNAME='192.168.1.132'
then
npm start
@and answered this best in 2017, and I think we've all given this person enough time to post what I think is the best answer as an answer. So now I'm doing it, after posting upvote #42 (such a fitting number) to the comment that saved my bacon. But we digress. Combining the well-celebrated answer with @and's golden comment...
You can set the environment variable REQUESTS_CA_BUNDLE so you don't have to modify your code:
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
The improvement, if it's not clear, is this: /etc/ssl/certs/ca-certificates.crt will contain not merely any self-signed cert you added to your trust store, but also all of the other standard certs. That's a big deal because, for example, I ran into a situation where, when REQUESTS_CA_BUNDLE was set to just my self-signed cert, the AWS CLI could no longer authenticate. (Don't ask me why AWS cares about REQUESTS_CA_BUNDLE. I don't know. I do know, however, that using ca-certificates.crt solved the problem.)
The error, however vague, was due to me not importing Text from react-native. The Modal call was the culprit. Thanks @sparkJ and @Estus Flask. Also, great username.
Thank you everyone for your comments and answers. I found out that it was VS Code all along, refreshing the same log file to which I was adding data. I think VS Code started doing that with the latest update, as I had not seen it auto-refresh while open before.
I added
$wgMainCacheType = CACHE_ACCEL;
$wgSessionCacheType = CACHE_DB;
which did not help then I realised I already had
## Shared memory settings
$wgMainCacheType = CACHE_NONE;
$wgMemCachedServers = array();
CACHE_NONE is recommended above. I tried both ACCEL and NONE.
I deleted cookies. I don't know how to access the database. I still can't log in on one browser (my main browser) but I can see the wiki content.
I used a regular expression to solve this issue. In Polars you signal regular expressions with the start and end anchors ^ and $, and the * needs to be escaped, so the full solution looks like:
import polars as pl
df = pl.DataFrame({"A": [1, 2], "B": [3, None], "*": [4, 5]})
print(df.select(pl.col(r"^\*$"))) # --> prints only the "*" column
There are two locations for this information. In some cases, you might need to look in both places (try the primary first; if it's missing, try the alternate).
Primary location:
host.hardware.systemInfo.serialNumber
Alternate location:
host.summary.hardware.otherIdentifyingInfo
In some of my systems, I cannot find the tags in the primary location, and traversing the alternate location helps find them. But between those two locations, I have always been able to get the tags. It might be a bit tricky to fish the info out; the following code should help.
if host.summary.hardware and host.summary.hardware.otherIdentifyingInfo:
    for info in host.summary.hardware.otherIdentifyingInfo:
        if info.identifierType and info.identifierType.key:
            key = info.identifierType.key.lower()
            if "serial" in key or "service" in key or "asset" in key:
                if info.identifierValue and info.identifierValue.strip():
                    serial_number = info.identifierValue
I love you, I spent a whole day debugging the framework... for this simple thing... T.T
Have you solved this problem? I ran into the same question.
If you're looking for a C implementation (usable with C++) that handles UTF-8 quite well and is also very small, you could have a look here:
How to uppercase/lowercase UTF-8 characters in C++?
These C-functions can be easily wrapped for use with std::string.
I'm not saying this is the most robust way, after all, all the problems with std::string will remain, but it could be helpful in some use cases.
Is there any API I can use with C++ to do this?
No, there is no API to perform this task.
Microsoft's policy is that such tasks must be performed by the user using the provided Windows GUI.
Explaining the use case: if you are doing data augmentation, then usually the following sequence will work:
import tensorflow as tf
from tensorflow.keras import layers
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1)
])
If it's the imperative approach you're after, the original loop should do just fine.
Probably just my style preference -- I prefer having the logic in the accumulator -- it seems more like the imperative solution.
What would git(1) do? Or, What does git(1) do?
! tells it to run your alias in a shell. What shell? It can’t use a
specific one like Ksh or Zsh. It just says “shell”. So let’s try /bin/sh:
#!/bin/sh
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get the same error but a useful line number:
5: Bad substitution
Namely:
allArgsButLast="${@:1:$#-1}";
Okay. But this is a Bash construct. So let’s change it to that:
#!/usr/bin/env bash
mvIntoDIR() {
cd ${GIT_PREFIX:-.};
allArgsButLast="${@:1:$#-1}";
lastArg="${@: -1}";
git mv -v $allArgsButLast $lastArg/;
git commit -uno $allArgsButLast $lastArg -m "Moved $allArgsButLast into $lastArg/"; \
};
mvIntoDIR
We get a different error:
line 5: $#-1: substring expression < 0
Okay. So git(1) must be running a shell which does not even know what
${@:1:$#-1} means. But Bash at least recognizes that you are trying to use a construct that it knows, even if it is being misused in some way.
Now the script is still faulty. But at least it can be fixed since it is running in the shell intended for it.
I would either ditch the alias in favor of a Bash script or make a Bash script and make a wrapper alias to run it.
If you don't want to map(), you could replace the accumulator with
(map, foo) -> {
map.put(foo.id(), foo);
map.put(foo.symbol(), foo);
}
But at this point it's hard to see how streaming is an improvement over a simple loop. What do you have against map() anyway?
Your order of parentheses is wrong:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size)), y)
Should be:
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
This should be possible, depending on what your data looks like. You may need to use transformations to split the data into multiple series. Do you have an example of your data?
I threw this State Timeline together with the following CSV data, not sure how close it is to what you have:
Start, End, SetID, ModID, ModType
2025-01-01 01:00:00, 2025-01-01 01:15:00, 100, 101, Set
2025-01-01 01:00:00, 2025-01-01 01:01:00, 100, 102, Alpha
2025-01-01 01:02:00, 2025-01-01 01:04:00, 100, 103, Beta
2025-01-01 01:05:00, 2025-01-01 01:25:00, 110, 111, Set
2025-01-01 01:05:00, 2025-01-01 01:08:30, 110, 113, Alpha
2025-01-01 01:07:00, 2025-01-01 01:12:00, 110, 115, Gamma
Transformations:
Format the times correctly
The main one is Partition by values to split the data into separate series based on SetID & ModID
That should get you the chart you want, but you'll want to add some settings so the names don't look weird and to color the bars how you like:
Display Name: Module ${__field.labels.ModID} to convert from ModType {ModID="101", SetId="100"}
Value Mappings: Set -> color Yellow , Alpha -> Red, Beta -> Green, etc.
You can use the trim() modifier to cut a Shape in half:
Circle()
.trim(from: 0, to: 0.5)
.frame(width: 200, height: 200)

Learn more from this link.
https://github.com/phoenix-tui/phoenix - a high-performance TUI framework for Go with DDD architecture, solid Unicode handling, and Elm-inspired design. A modern alternative to Bubbletea/Lipgloss. All function keys supported out of the box!
Maybe your layers config is not set up the way you think. If you check it, you might get a surprise (Project Settings -> Physics (or Physics 2D)).
For C++ it's this:
-node->get_global_transform().basis.get_column(2); // forward
node->get_global_transform().basis.get_column(0); // right
node->get_global_transform().basis.get_column(1); // up
https://pub.dev/packages/web_cache_clear
I made this package because I needed it too. It assumes you have a backend where you can update your version number; every time the page loads, it checks the session version against the backend version. If they differ, it clears the cache storage and reloads the page.
In the integrated terminal or the macOS Terminal (it doesn't matter), just run su, press Enter, and input your password. After becoming root, run "npm install -g nodemon". That worked for me.
What is @{n='BusType';e={$disk.bustype}}? AI gave me similar examples, but I barely understand it. It seems n and e are shortcuts for Name and Expression in the so-called calculated property syntax:
@{Name='PropertyName'; Expression={Script Block}} or @{n='PropertyName'; e={Script Block}}.
AI suggested an example:
Get-ChildItem -File | Select-Object Name, @{n='SizeMB'; e={$_.Length / 1MB}}
demonstrating exactly what I wanted to achieve. So why does @{n;e} act strangely in Select-Object?
This is due to a (not so?) recent change in margin collapsing. In a block layout, the margin-bottom and margin-top of the heading elements collapse (only one margin applies), but in a flex layout, margins are not collapsed. So what you see in the flex layout is all the margins accounted for.
Try removing margin-top or margin-bottom for your needs. You can read more about margins here: https://www.joshwcomeau.com/css/rules-of-margin-collapse/ or at mdn: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_box_model/Mastering_margin_collapsing
You could use process tracking instead
vrrp_instance Vtest_2 {
[...]
track_process {
check_haproxy
}
}
The tracking block would look like this
vrrp_track_process check_haproxy {
process haproxy
weight 10
}
This way you don't need a separate script running.
For those that are facing the same problem, please check this configuration under:
File > Project Structure > Project Settings > Module
Screenshot:
This is still happening 6.6 years later. Microsoft SUCKS.
You can use a Navigator widget or a named route to achieve this https://docs.flutter.dev/ui/navigation#using-the-navigator
If you intend displaying as a pop-up modal, then refer to the example on showDialog https://api.flutter.dev/flutter/material/showDialog.html#material.showDialog.1
As per your suspicion, yes, I agree that the aggregation query is quite inefficient. I am not sure if you have supporting indexes for these.
The base collection from which you run the aggregate query should have this index: { "seller_company_id": 1, "meetings.is_meeting_deleted": 1, "is_deleted": 1, "is_hidden": 1 }
It is OK if you don't have "meetings.is_meeting_deleted": 1 in the index, as it is multikey.
And for the joined Company collection, the default _id index is sufficient.
The CPU utilisation seems pretty high (100%), and per the real-time tab screenshot there seem to be a lot of getMore operations running. I believe it is either Atlas Search or the change stream. Can you share what the most common getMore commands are?
With the above info we can find some clues.
thanks, Darshan
Use https://www.svgviewer.dev/, paste in the XML code, download it, and import it (worked for me).
Please note that in MongoDB, when you drop a collection or index, it may not be deleted immediately. Instead, it enters a "drop-pending" state, especially in replica sets, until it is safe to remove, that is, after replication and commit across nodes. The "ObjectIsBusy" error means MongoDB cannot proceed because the object, such as an index, is still active or not fully deleted.
This status is internal and usually clears automatically once MongoDB finishes its background cleanup.
As you said it is a fresh Mongo instance, which makes me curious: was there an older version of mongod that ran and was abandoned? If it is fresh, you can clear the dbPath and try starting them again.
Thanks, Darshan
!4 value filled in property worked for me
I figured out that you can get data out of the embedded report through the Power BI JavaScript client; we were able to use this to get the user's filter selections. We were also able to add the user's email address to Row Level Security and implement it in the reports to only show them content they were allowed to see.
ErrorLevel is equal to -1 when the volume is encrypted.
You can run the following command, for example, to unlock the volume:
manage-bde -status c: -p || manage-bde -unlock c: -RecoveryPassword "Key-Key..."
One useful resource is the AWS latency matrix from Cloud Ping.
You can use this website to answer these questions : https://www.dcode.fr/javascript-keycodes
Functions for component-wise operations have the prefix cwise in Eigen. cwiseQuotient performs component-wise division.
Agreed. It's not the same when you are exporting to CSV. Some columns need to be adjusted. In my case, many columns contain numerical IDs that often start with zeros. Excel deletes those zeros, makes the column numeric, and, worse, puts the number in scientific notation.
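One common workaround when generating the CSV yourself is the ="..." formula trick, which makes Excel keep the value as text. A minimal Python sketch (the column name and sample IDs are made up for illustration):

```python
# IDs with leading zeros that Excel would otherwise turn into numbers.
ids = ["00123", "007", "42"]

# Wrapping each value as ="..." makes Excel treat it as a text formula,
# preserving the leading zeros on open.
lines = ["id"] + [f'="{i}"' for i in ids]
csv_text = "\n".join(lines)
print(csv_text)
```

On the reading side, the equivalent fix is to force the column to be parsed as text (e.g. `dtype=str` in pandas' `read_csv`) instead of letting the tool infer a numeric type.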
@tibortru's answer works. I had a requirement to run a scheduled Spring Batch job much more often in the test environments. I achieved it like so in application-test.yml:
batch:
  scheduler:
    cron: 0 ${random.int[1,10]} 7-23 * * *
And referenced it like so:
@Scheduled(cron = "${batch.scheduler.cron}")
public void runJob()
Azure SQL supports synonyms for cross-database access but only on the same server.
"Four-part names for function base objects are not supported."
"You cannot reference a synonym that is located on a linked server."
I encountered this while trying to download a file using Lambda from S3.
For my scenario, I did the following steps:
Go to IAM -> Roles -> Search for Lambda's role (you can find it in Lambda -> Permissions -> Execution role -> Role name)
Click Add permissions -> Create inline policy
Choose a service -> S3
In the Filter Actions search bar look for GetObject -> Select the checkbox
In Resources click on Add ARNs in the "object" row
Add bucket name and the resource object name if needed - if not, select Any bucket name and/or Any object name. Click Add ARNs
Click Next -> Add a Policy name
Click Create policy
add
<uses-permission android:name="android.permission.WRITE_SETTINGS" />
in your AndroidManifest.xml
The solution I've often employed in this type of scenario makes use of cfthread and some form of async polling.
Without a concrete example, I'll try and outline the basic mechanics...
User submits request.
A Unique ID is generated.
A <cfthread> begins, handling the long-running request and writing status update to a session variable scoped by the Unique ID.
The Unique Id is returned to the user, and they are directed to a page that uses JS to poll some endpoint that will read the session-scoped status information.
I've used XHR polling, and event-stream based solutions here - but the principle holds whichever technique you employ.
Encountered the same problem today and was pretty lost.
It seems it was due to mismatches in the package versions between the apps and the library.
In the end I ran `pnpm dlx syncpack fix-mismatches` (or run `pnpm dlx syncpack list-mismatches` first to see what changes will be applied) and the problem was solved.
linkStyle={{
backgroundColor: x + 1 !== page ? "#ffffffff" : "",
color: x + 1 !== page ? "#000000ff" : "",
borderColor: x + 1 !== page ? "#000000ff" : "",
}}
Add this inline CSS, or create a custom class in index.css; it will resolve the issue.
Did you find a solution, brother? We're all in the same boat here.
I installed a PHP package that lets you configure, in your composer.json file, a path that copies vendor assets to a public directory.
My apt sources were messed up, my bad.
Sorting the filenames before generating their arrays / hashes fixed it.
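For example, a Python sketch of the idea (the function name is made up; the point is that os.listdir's order is filesystem-dependent, so sorting makes the combined hash deterministic):

```python
import hashlib
import os

def directory_fingerprint(path: str) -> str:
    """Hash file names and contents in a deterministic order."""
    digest = hashlib.sha256()
    # sorted() is the fix: os.listdir returns entries in arbitrary order.
    for name in sorted(os.listdir(path)):
        digest.update(name.encode("utf-8"))
        full = os.path.join(path, name)
        if os.path.isfile(full):
            with open(full, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```

Without the sort, two runs over the same directory (or the same directory on two machines) can produce different hashes even though the contents are identical.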
I know I'm late, but I just stumbled upon the same issue. I'm using OpenPDF-1.3.33, and by default cell's element is aligned to the left.
You need to set:
p.setAlignment(Element.ALIGN_CENTER); // not Element.ALIGN_MIDDLE
Problem solved. I had a subsequent:
parameters.to_sql("Parameter", con=connection, if_exists='replace', index=False)
That replaces the TABLE during the data import, not the existing ROWS.
Anyway, thanks for your feedback!
I encountered this exact issue and resolved it. The 10-second delay is caused by IPv6 connectivity being blocked in your Security Groups.
Service A: Fargate task with YARP reverse proxy + Service Connect
Service B: Fargate task with REST API
Configuration: Using Service Connect service discovery hostname
VPC: Dual-stack mode enabled
The fix: add IPv6 inbound rules to your Security Groups.
The root cause (as diagnosed with Claude AI):
When using ECS Service Connect with Fargate in a dual-stack VPC:
Service Connect attempts IPv6 connection first (standard .NET/Java behavior per RFC 6724)
Security Group silently drops IPv6 packets (if IPv6 rules aren't configured)
TCP connection times out after exactly 10 seconds (default SYN timeout)
System falls back to IPv4, which succeeds immediately
For anyone looking at this more recently:
In scipy version 1.16, and presumably earlier, splines can be pickled and the code in the question works without error.
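A quick round-trip check, assuming a recent SciPy (CubicSpline and the sin data here are just an illustration):

```python
import pickle

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 10, 20)
spline = CubicSpline(x, np.sin(x))

# Serialize and restore the spline object via pickle.
restored = pickle.loads(pickle.dumps(spline))

# The restored spline evaluates identically to the original.
assert abs(float(restored(5.0)) - float(spline(5.0))) < 1e-9
```

The same pattern works for the other PPoly-based interpolators; if you are on a much older SciPy, test your specific spline class first.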
Probably you need to remove the current connection and add it again.
@jared_mamrot
Question: I want to calculate elasticities, so I am using elastic().
ep2 <- elastic(est,type="price",sd=FALSE)
ed <- elastic(est,type="demographics",sd=FALSE)
ei <- elastic(est,type="income",sd=FALSE)
ep2 and ed are working, but when type="income", running ei shows an error:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
In AWS Service Connect, it will try to resolve IPv6 first; if that fails, it falls back to IPv4.
In the failure case it will hang for 10 s.
To resolve it, check your security group and allow the IPv6 port.
Thank you so much! It solved the problem after two entire days of struggling. Here is a minor tweak:
curl 'https://graph.facebook.com/v22.0/YOUR_PHONE_NUMBER_ID/register' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
-d '{
"messaging_product": "whatsapp",
"pin": "000000"
}'
As said above, maps are not safe inside goroutines like this.
The simplest way to modify your example is to add a mutex and perform lock/unlock on each pass.
package main
import (
"fmt"
"sync"
)
type Counter struct {
m map[string]int
mu sync.Mutex
}
func main() {
counter := Counter{m: make(map[string]int)}
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
key := fmt.Sprintf("key%d", id)
counter.mu.Lock()
defer counter.mu.Unlock()
counter.m[key] = id * 10
}(i)
}
wg.Wait()
fmt.Println(counter.m)
}
This should help demonstrate the issue. But there are other ways to handle this, with read/write mutex or a channel. Note that using locks may slow down the program, compared to other methods.
More discussion: How safe are Golang maps for concurrent Read/Write operations?
This appears to be a recent issue with Anaconda, and on the Anaconda forum, this solution has worked:
Could you try updating conda-token by running:
conda install --name base conda-token=0.6.1
I think you should use below style to access the nested variables in scripts
doc[“attributes”][“jewelry_v2”][“keyword”]
Or use Ctx style
ctx.attributes.jewelry_v2.keyword.value
Look at this post to better understand character sets (Unicode) and encodings (UTF-8):
I downgraded DotVVM.AspNetCore from 5.0.0 to 4.3.9 and now it works fine.
Writing to the log file from multiple processes will work fine, until the maximum log file size is reached and the RotatingFileHandler attempts to rotate the log file. Rotating the log file is done by renaming the log file (appending .1 to the file name). This will not be possible because the file will be in use (locked) by other processes.
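One common workaround (a sketch of one option, not the only one) is to let a single listener own the RotatingFileHandler and feed it from every other process through a queue, using the stdlib QueueHandler/QueueListener pair:

```python
import logging
import logging.handlers
import multiprocessing

def worker(queue: multiprocessing.Queue) -> None:
    # Each worker only writes records to the queue, never to the file,
    # so rotation can never race with its writes.
    logger = logging.getLogger("app")
    logger.addHandler(logging.handlers.QueueHandler(queue))
    logger.setLevel(logging.INFO)
    logger.info("hello from a worker")

if __name__ == "__main__":
    queue: multiprocessing.Queue = multiprocessing.Queue()
    # Only the listener touches the file, so renaming during rotation
    # cannot collide with another process holding the file open.
    file_handler = logging.handlers.RotatingFileHandler(
        "app.log", maxBytes=1_000_000, backupCount=3
    )
    listener = logging.handlers.QueueListener(queue, file_handler)
    listener.start()

    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    p.join()

    listener.stop()
```

The file name, logger name, and size limits above are placeholders. The logging cookbook also shows a socket-based variant for workers spread across machines.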
So the reason I'm getting Forbidden is that, despite my channel being partnered, I still have to request access to the API. Not entirely sure where, but that's why I'm getting a 403.
bump - asking the same question
I just found out the issue.
Business-initiated calling is not available for U.S. phone numbers in the Meta (WhatsApp Business Cloud API).
That’s why the API returns:
(#138013) Business-initiated calling is not available
So if you're testing with a U.S. WhatsApp number (starting with +1), you won’t be able to send call permission requests or start business-initiated calls.
Switching to a phone number from a supported country resolves the issue.
I found the issue. In the orm.xml, I had to use
org.axonframework.eventhandling.AbstractDomainEventEntry
for the attribute config instead of
org.axonframework.eventhandling.AbstractSequencedDomainEventEntry
Just restarting Jenkins fixed that for me.
It is a disassembly of try-catch blocks.
Please check this out: __unwind
@Tobias, Oracle explicitly permits custom information in the manifest:
Modifying a Manifest File
You use the m command-line option to add custom information to the manifest during creation of a JAR file. This section describes the m option.
For me, the answer from @devzom actually worked. After a little research into it I found that by default, Astro uses static site generation (SSG) where pages are pre-rendered at build time. In default ('static') mode, routes with dynamic query parameters don't work properly because there's no server running to handle the requests at runtime.
By adding output: 'server', you're telling Astro to run in Server-Side Rendering (SSR) mode, which means there's an actual server processing requests, so dynamic routes with query parameters work correctly.
In the file astro.config.mjs
export default defineConfig({
output: 'server',
});
I then put my code up on DigitalOcean and had to install the following adapter:
npm install @astrojs/node
(or in my case, since I was using an older theme: npm install @astrojs/node@^8.3.4 )
Then your astro.config.mjs looks like:
import node from '@astrojs/node';
export default defineConfig({
output: 'server',
adapter: node({
mode: 'standalone',
}),
});
There are also hosting adapters for Vercel, Netlify, and Cloudflare
https://docs.astro.build/en/guides/on-demand-rendering/
As others have mentioned, setting output to 'hybrid' in astro.config.mjs will work, but you will need to add "export const prerender = false;" at the top of the page that you want to get the query parameters for, basically this is telling the file that the route in that file needs SSR.
All your routes will be served through that __fallback.func. What makes you think that they aren't being served? What happens when you access a route?
Installing binutils package in the Alpine Docker image solved this handshake issue for me.
I saw this solution in alpine/java image. Here's the link to its Dockerfile
The error is that jalali_date uses distutils, but in Python 3.12, distutils is removed and should be replaced with setuptools.
pip install setuptools
Thank you to everyone that replied. I think this is more than enough feedback to work with. I clearly still have a lot to learn. Also, apologies to @AlexeiLevenkov for not putting this in code review. I will look at using table lookups and animating palettes. For anyone else that stumbles on my thread I had a bug in my code where it was clipping because the seconds were looping back around. The fix was to create a frameCounter variable and increment it every time Invalidate() was called. Use that instead of the DateTime.Now.Second otherwise it will not be smooth.
There is actually a way to achieve this. In cases where you are packaging multiple executables in a single package, this option will be helpful. I've verified this with pyinstaller==6.15.0:
exe = EXE(....,
....,
contents_directory='_new_internal'
)
Planner Premium uses Project schedule APIs. The official documentation is here:
But I'm not familiar with those APIs.