Plotly.d3 was removed in v2.0.0 of plotly.js. You'll need to import D3 separately going forward.
Detecting in-app browsers is a tricky business. The package detect-inapp used to be the go-to, but it sadly has not been maintained in a long while.
I forked and refactored it into inapp-spy and continue to maintain it for anyone who is still looking for a way to detect in-app browsers in 2025!
Did this solve your issue?
In your main.dart file:
localizationsDelegates: const [
S.delegate,
GlobalMaterialLocalizations.delegate,
GlobalWidgetsLocalizations.delegate,
GlobalCupertinoLocalizations.delegate,
FlutterQuillLocalizations.delegate,
],
locale: const Locale('en', 'PH'), // Set locale to Philippines; replace with your locale
supportedLocales: S.delegate.supportedLocales,
Note: You can remove the locale configuration if you don't want to apply it across your entire app, and set it per widget instead, just like you did in your second code snippet.
I've faced the same issue before, and that configuration helped me.
I had a similar issue and ended up using a tool called Converly. It's like Zapier but for conversion tracking in tools like Google Analytics, Google Ads, etc. You basically select a trigger (Avada form submitted) and then the actions (Conversion in Google Analytics, Conversion in Google Ads, etc). Worked perfectly for us
I'm getting the same error!
If I ask for "openid email profile accounting.transactions accounting.settings offline_access" it works.
...but if I add accounting.contacts to that list, I get sent to this page:
**Sorry, something went wrong**
Go back and try again.
If the issue continues, please visit our Status page
Error: unauthorized_client
Invalid scope for client
Error code: 500
I've tried in multiple browsers on multiple computers and always get the same behaviour. Asking for "accounting.contacts" breaks it.
What's strange is that we have two Xero apps, a test one and a production one. The test one lets me connect with the accounting.contacts scope, but the production one does not.
Did you ever find a solution to the problem?
A simple way is to use the Key-Generator’s online tool designed specifically for generating secret keys for Next.js Auth — very convenient:
https://key-generator.com/next-js-auth-secret-generator
A notification came to my Facebook account from Facebook saying that I am temporarily restricted from uploading photos until today at 8 PM.
Did you ever solve this?
Seems like this will do it (didn't test it yet myself).
https://plugins.jenkins.io/multibranch-job-tear-down/
Your from parameter must be `[title] <yourmail@yourdomain.com>`
Example:
from: `Verify Email <[email protected]>`
For anyone who needs this:
I have vim-rails working under TJ DeVries' Kickstart project (see link). It was as simple as adding one line of code to init.lua. https://github.com/nvim-lua/kickstart.nvim?tab=readme-ov-file
Here is a link to his YT video explaining the single kickstart file: https://www.youtube.com/watch?v=m8C0Cq9Uv9o
The URL of the Maven repo uses HTTPS protocol, not HTTP.
Run mvn help:effective-pom and inspect the result for the repository definitions.
They should be at least:
<pluginRepositories>
<pluginRepository>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>false</enabled>
</snapshots>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
</pluginRepository>
</pluginRepositories>
Consider mirroring the default Maven Central repository (the last resort to look in) with a statement in your settings.xml like:
<mirror>
<id>public-local</id>
<mirrorOf>central</mirrorOf>
<name>Resolve the default Maven Central repository via the local Nexus public repository</name>
<url>http://localhost:8081/repository/maven-public/</url>
</mirror>
From the log above I see that you use a local Archiva, so change the URL accordingly. If Archiva still does not find the needed plugins, maybe it is not configured as a proxy of Maven Central?
import os
import re
import urllib.request
import youtube_dl
import shutil
import discord
from discord.ext import commands
from discord.utils import get
bot = commands.Bot(command_prefix='g.')
token = '<mytoken>'
queues = {} # for tracking queued song numbers
# -----------------------------
# Function to check and play next song
# -----------------------------
def check_queue(ctx, voice):
Queue_infile = os.path.isdir("./Queue")
if Queue_infile:
DIR = os.path.abspath(os.path.realpath("Queue"))
length = len(os.listdir(DIR))
if length > 0:
first_file = os.listdir(DIR)[0]
song_path = os.path.join(DIR, first_file)  # portable path join instead of a hard-coded backslash
if os.path.isfile("song.mp3"):
os.remove("song.mp3")
shutil.move(song_path, "./song.mp3")
voice.play(
discord.FFmpegPCMAudio("song.mp3"),
after=lambda e: check_queue(ctx, voice)
)
voice.source = discord.PCMVolumeTransformer(voice.source)
voice.source.volume = 0.3
print("Playing next queued song...")
else:
queues.clear()
print("Queue empty, stopping.")
else:
queues.clear()
print("No Queue folder found.")
# -----------------------------
# PLAY command
# -----------------------------
@bot.command(pass_context=True)
async def play(ctx, *args: str):
search = '+'.join(args)
if search.strip() == "":
await ctx.send("Uso: g.play (Video)")
return
html = urllib.request.urlopen("https://www.youtube.com/results?search_query=" + search)
video_ids = re.findall(r"watch\?v=(\S{11})", html.read().decode())
url = "https://www.youtube.com/watch?v=" + video_ids[0]
print("Found URL:", url)
# remove old song if exists
if os.path.isfile("song.mp3"):
try:
os.remove("song.mp3")
queues.clear()
print("Removed old song file")
except PermissionError:
await ctx.send("Error: Ya estoy poniendo musica! Usa g.queue para agregar más canciones.")
return
# clear old queue folder
if os.path.isdir("./Queue"):
try:
shutil.rmtree("./Queue")
print("Removed old Queue Folder")
except OSError:  # only swallow filesystem errors instead of a bare except
pass
await ctx.send("Preparando cancion...")
voice = get(bot.voice_clients, guild=ctx.guild)
if not voice:
if ctx.author.voice:
channel = ctx.message.author.voice.channel
voice = await channel.connect()
else:
await ctx.send("⚠️ No estas en un canal de voz.")
return
ydl_opts = {
'format': 'bestaudio/best',
'quiet': True,
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
}],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
print("Downloading audio now\n")
ydl.download([url])
for file in os.listdir("./"):
if file.endswith(".mp3"):
name = file
os.rename(file, "song.mp3")
voice.play(
discord.FFmpegPCMAudio("song.mp3"),
after=lambda e: check_queue(ctx, voice)
)
voice.source = discord.PCMVolumeTransformer(voice.source)
voice.source.volume = 0.3
await ctx.send(f"▶️ Reproduciendo: {url}")
# -----------------------------
# QUEUE command
# -----------------------------
@bot.command(pass_context=True)
async def queue(ctx, *searchs):
search = '+'.join(searchs)
if not os.path.isdir("./Queue"):
os.mkdir("Queue")
DIR = os.path.abspath(os.path.realpath("Queue"))
q_num = len(os.listdir(DIR)) + 1
while q_num in queues:
q_num += 1
queues[q_num] = q_num
queue_path = os.path.join(DIR, f"song{q_num}.%(ext)s")  # portable path join
ydl_opts = {
'format': 'bestaudio/best',
'quiet': True,
'outtmpl': queue_path,
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192'
}],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
html = urllib.request.urlopen("https://www.youtube.com/results?search_query=" + search)
video_ids = re.findall(r"watch\?v=(\S{11})", html.read().decode())
url = "https://www.youtube.com/watch?v=" + video_ids[0]
print("Queueing:", url)
ydl.download([url])
await ctx.send("➕ Añadiendo canción " + str(q_num) + " a la lista!")
# -----------------------------
# SKIP command
# -----------------------------
@bot.command()
async def skip(ctx):
voice = get(bot.voice_clients, guild=ctx.guild)
if voice and voice.is_playing():
voice.stop() # triggers check_queue through after callback
await ctx.send("⏭️ Canción saltada.")
else:
await ctx.send("⚠️ No hay canción reproduciéndose.")
# -----------------------------
# On Ready
# -----------------------------
@bot.event
async def on_ready():
print(f"Bot {bot.user} has connected to discord!")
bot.run(token)
This code works without any errors; the key point was selecting the correct columns (1:5) of point_accuracy_measures in the accuracy() function.
fit %>%
fabletools::forecast(h = 1) %>%
fabletools::accuracy(Time_Series_test, measures = point_accuracy_measures[1:5]) %>%
dplyr::select(.model:MAPE) %>%
dplyr::arrange(RMSE)
You should be able to do it by the following:
set the registryUrls in your renovate.json's packageRules, as explained here: https://docs.renovatebot.com/configuration-options/#registryurls. Also see this for finding out the repo URL for CodeArtifact: https://docs.aws.amazon.com/codeartifact/latest/ug/npm-auth.html#configuring-npm-without-using-the-login-command
keep the aws codeartifact login inside the pipeline
copy the renovate.json file from the repo directory to a temporary location
embed the .npmrc inside the temporary renovate.json, perhaps using a tool like jq (see the sketch after this list)
set the RENOVATE_CONFIG_FILE environment variable to point to the temporary renovate.json
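For illustration, here is a rough Python equivalent of the jq step (the file paths, the temporary location, and the use of Renovate's npmrc config option are assumptions for the sketch, not taken from your pipeline):
import json
import os
import shutil

shutil.copy("renovate.json", "/tmp/renovate.json")            # work on a temporary copy

with open("/tmp/renovate.json") as f:
    config = json.load(f)

with open(os.path.expanduser("~/.npmrc")) as f:               # written by `aws codeartifact login`
    config["npmrc"] = f.read()                                # Renovate can read registry auth from this option

with open("/tmp/renovate.json", "w") as f:
    json.dump(config, f, indent=2)

# then export RENOVATE_CONFIG_FILE=/tmp/renovate.json in the pipeline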
sql_tmpl = "delete from Data where id_data in (SELECT UNNEST(:iddata))"
params = {
'iddata':[1, 2, 3, 4],
}
self.session.execute(text(sql_tmpl), params)
I needed to upgrade node
This helped:
npm update node
You need to have Firebird installed; for that version I think it is Firebird 2.5. You also need ApacheFriends XAMPP version 5.6.40.
But check that the Firebird server is actually on that port.
I managed to find a solution for this. If someone knows a better solution, please let me know.
public File aggregateAllBenchmarks() {
// Load benchmark configuration
PlannerBenchmarkConfig benchmarkConfig = PlannerBenchmarkConfig.createFromXmlResource("benchmarkConfig.xml");
File benchmarkDirectory = new File(String.valueOf(benchmarkConfig.getBenchmarkDirectory()));
if (!benchmarkDirectory.exists() || !benchmarkDirectory.isDirectory()) {
throw new IllegalArgumentException("Benchmark directory does not exist: " + benchmarkDirectory);
}
// Read all existing benchmark results from the directory
BenchmarkResultIO benchmarkResultIO = new BenchmarkResultIO();
List<PlannerBenchmarkResult> plannerBenchmarkResults =
benchmarkResultIO.readPlannerBenchmarkResultList(benchmarkDirectory);
if (plannerBenchmarkResults.isEmpty()) {
throw new IllegalArgumentException("No benchmark results found in directory: " + benchmarkDirectory);
}
// Collect all single benchmark results and preserve solver names
List<SingleBenchmarkResult> allSingleBenchmarkResults = new ArrayList<>();
Map<SolverBenchmarkResult, String> solverBenchmarkResultNameMap = new HashMap<>();
for (PlannerBenchmarkResult plannerResult : plannerBenchmarkResults) {
for (SolverBenchmarkResult solverResult : plannerResult.getSolverBenchmarkResultList()) {
allSingleBenchmarkResults.addAll(solverResult.getSingleBenchmarkResultList());
solverBenchmarkResultNameMap.put(solverResult, solverResult.getName());
}
}
// Create and configure the benchmark aggregator
BenchmarkAggregator aggregator = new BenchmarkAggregator();
aggregator.setBenchmarkDirectory(benchmarkDirectory);
aggregator.setBenchmarkReportConfig(benchmarkConfig.getBenchmarkReportConfig());
// Perform the aggregation - returns HTML report file
File htmlOverviewFile = aggregator.aggregate(allSingleBenchmarkResults, solverBenchmarkResultNameMap);
return htmlOverviewFile;
}
https://github.com/dohooo/supazod
This project was created for this purpose.
Okay, I finally got this working after following the first answer by @vangj.
The issue is that some default scopes like 'openid' do not return an accessToken ('openid' was the only scope I had defined). I found this out by implementing a manual SSO function and some googling.
You need to create a custom scope and expose your API so that an accessToken is returned via the Microsoft interceptor config.
So make sure your protected resource scopes include a scope that will actually return an accessToken (openid will not).
export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
const protectedResourceMap = new Map<string, Array<string>>();
protectedResourceMap.set ( // This triggers automatic silentSSO
environment.apiUrl, //your app will try to get a token if protected resource gets called
['openid', 'https://yourapi.onmicrosoft.com/{clientid}/App.Access'] // You need a custom application scope and expose an API, in order to get 'accessToken' to return, if not it will be empty and fail.
);
return {
interactionType: InteractionType.Redirect,
protectedResourceMap,
};
}
class MetroRailway:
def __init__(self):
# Initialize the MetroRailway with an empty list of stations
# Using list instead of queue to allow easier insertions/deletions
self.stations = []
# Method to add a new station
# Special handling is done when adding the 4th station
def addnewstation(self, station):
if len(self.stations) < 3:
# Directly add the station if fewer than 3 stations exist
self.stations.append(station)
print(f"Station '{station}' added successfully.")
elif len(self.stations) == 3:
# When adding the 4th station, user chooses position
print("Insert at --")
print("[1] First Station")
print("[2] Middle Station")
print("[3] Last Station")
choice = input("Choose Position: ")
# Insert based on user choice
if choice == "1":
self.stations.insert(0, station) # Insert at the beginning
print(f"Station '{station}' inserted as First Station.")
elif choice == "2":
self.stations.insert(2, station) # Insert at the middle
print(f"Station '{station}' inserted as Middle Station.")
elif choice == "3":
self.stations.append(station) # Insert at the end
print(f"Station '{station}' inserted as Last Station.")
else:
# Invalid choice defaults to adding at the end
print("Invalid choice. Added to the Last Station by default.")
self.stations.append(station)
# Display the updated first 3 stations
print("\nUpdated First 3 Stations:")
for i, s in enumerate(self.stations[:3], 1):
print(f"[{i}] {s}")
# Ask user for preferred position (just an additional prompt)
pos_choice = input("Enter position: ")
print(f"You selected position {pos_choice}.")
else:
# If more than 4 stations, simply append at the end
self.stations.append(station)
print(f"Station '{station}' added successfully.")
# Method to print all stations in the list
def printallstations(self):
if not self.stations:
print("No stations available.") # Handle empty station list
else:
print("Stations List:")
# Print each station with numbering
for i, station in enumerate(self.stations, 1):
print(f"{i}. {station}")
# Method to delete a specific station by name
def deletestation(self):
if not self.stations:
print("No stations to delete!") # Handle case when list is empty
return
station = input("Enter station name to delete: ")
if station in self.stations:
self.stations.remove(station) # Remove the station if it exists
print(f"Station '{station}' deleted successfully.")
else:
# Notify user if station is not found
print(f"Station '{station}' not found in the list!")
# Method to get the distance between two stations
# Distance is calculated as the difference in their list indices
def getdistance(self, station1, station2):
if station1 in self.stations and station2 in self.stations:
index1 = self.stations.index(station1)
index2 = self.stations.index(station2)
distance = abs(index1 - index2) # Absolute difference
print(f"Distance between {station1} and {station2} is {distance} station(s).")
else:
# If one or both stations are not in the list
print("One or both stations not found!")
# Main function to run the MetroRailway program
def main():
metro = MetroRailway() # Create a MetroRailway instance
# Display welcome message and available actions
print("\nWelcome to LRT1 MetroRailway")
print("Available Actions:")
print(" help - Show this menu")
print(" printallstations - Show all stations")
print(" addnewstation - Add a new station (special choice on 4th input)")
print(" deletestation - Delete a specific station by name")
print(" getdistance - Measure distance between two stations")
print(" stop - Stop current run")
print(" exit - Exit the program completely")
# Infinite loop to keep program running until user chooses to stop/exit
while True:
action = input("\nChoose Action: ").lower()
if action == "help":
# Show help menu again
print(" help - Show this menu")
print(" printallstations - Show all stations")
print(" addnewstation - Add a new station (special choice on 4th input)")
print(" deletestation - Delete a specific station by name")
print(" getdistance - Measure distance between two stations")
print(" stop - Stop current run")
print(" exit - Exit the program completely")
elif action == "printallstations":
# Show all stations
metro.printallstations()
elif action == "addnewstation":
# Prompt for station name then add
station = input("Enter station name: ")
metro.addnewstation(station)
elif action == "deletestation":
# Delete station
metro.deletestation()
elif action == "getdistance":
# Prompt user for two stations and calculate distance
station1 = input("Enter first station: ")
station2 = input("Enter second station: ")
metro.getdistance(station1, station2)
elif action == "stop":
# Stop current run but not exit the whole program
print("Stopping current run...")
break
elif action == "exit":
# Exit the program completely
print("Exiting program completely. Goodbye!")
exit()
else:
# Handle invalid actions
print("Invalid action. Type 'help' to see available actions.")
# Run the program only if file is executed directly
if __name__ == "__main__":
main()
Use iMazing to extract it, then change it to .ipa. I don't know how to convert it to Swift, though.
The difference is just that a queue is an ADT (abstract data type), while a buffer is a concrete data structure.
class MyClass:
    pass  # placeholder so the snippet runs on its own; use your real class here

obj = MyClass()
fds = ['a', 'b', 'c']
for i in fds:
    setattr(obj, i, [i])      # create the attribute named by the string i
print(obj.a)                  # ['a']
print(getattr(obj, 'b'))      # use getattr() when the attribute name is held in a variable
I don't know if anyone will look at this since this post is old, but I am using VS 2022 and I cannot access the designer files for some reason. I made an ASP.NET (VB) web project as an empty project and also tried making a Web Forms project, but I'm not seeing the designer files. What should I do? I think they are there, because I tried renaming another file to what the designer file should be named and it wouldn't let me.
You can point phpMyAdmin at a MySQL server running on another port.
Go to the xampp > phpMyAdmin directory.
Find the config.inc.php file.
Now change this line:
$cfg['Servers'][$i]['host'] = '127.0.0.1';
To
$cfg['Servers'][$i]['host'] = '127.0.0.1:3307';
The shortcut to open DevTools is F12. Go to the Network tab and, at the top, find the Throttling dropdown; change it from Offline to No throttling.
In the Transformer architecture, the weight matrices used to generate the Query (Q), Key (K), and Value (V) vectors do not change with each individual input value or token during inference. These weight matrices are learned parameters of the model, optimized during the training phase through back-propagation. Once training is complete, they remain fixed during the forward pass (inference) for all inputs.
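As a minimal illustration (a NumPy sketch with made-up dimensions, not any particular model), the projection matrices are created once and then reused unchanged for every input sequence:
import numpy as np

d_model, d_k = 8, 4
rng = np.random.default_rng(0)
W_Q = rng.normal(size=(d_model, d_k))   # learned during training, then frozen
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

def qkv(x):
    """x: (seq_len, d_model) token embeddings for one input sequence."""
    return x @ W_Q, x @ W_K, x @ W_V    # same weights, different activations per input

q1, k1, v1 = qkv(rng.normal(size=(3, d_model)))   # sequence 1
q2, k2, v2 = qkv(rng.normal(size=(5, d_model)))   # sequence 2 reuses the exact same W_Q/W_K/W_V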
Just change the projection to "Orthographic"
Use
"31.7%".AsSpan(..^1)
// e.g.
Decimal.TryParse("31.7%".AsSpan(..^1), System.Globalization.NumberStyles.Any, null, out var dec)
Android loads fine.
iOS fails with No ad to show for real ad units (but test units work fine).
The exception: a banner using getCurrentOrientationAnchoredAdaptiveBannerAdSize works on iOS.
Using google_mobile_ads: ^6.0.0 with Flutter 3.32.5.
Apps are already live.
This points to an ad-serving / configuration issue on iOS, not a code bug.
Here’s a checklist of things to verify:
Make sure the iOS ad unit is created specifically as a 300x250 (Medium Rectangle), not banner or other type.
Verify that you’re using the correct iOS ad unit ID (not accidentally Android’s).
In your AdMob console → App settings, ensure your iOS app is linked to the correct App Store listing.
Sometimes if it’s not linked or newly published, Google may not serve real ads yet.
The error Request Error: No ad to show means AdMob has no fill for that size / placement on iOS (not your code).
For medium rectangle (300x250), fill is often lower than banners, especially on iOS.
Adaptive banners usually get higher fill, which explains why that works.
Try running the app for a few days — sometimes new iOS ad units take 24–48h before real ads serve.
Make sure you’ve added in ios/Runner/Info.plist:
<key>GADApplicationIdentifier</key>
<string>ca-app-pub-xxxxxxxxxxxxxxxx~xxxxxxxxxx</string>
<key>SKAdNetworkItems</key>
<array>
<dict>
<key>SKAdNetworkIdentifier</key>
<string>cstr6suwn9.skadnetwork</string>
</dict>
<!-- Add all AdMob/AdManager SKAdNetworks -->
</array>
⚠️ Missing SKAdNetwork IDs can cause no ads on iOS.
Sometimes certain placements don’t serve due to limited demand in your region. Try a VPN or another country to test fill.
If you want higher fill for 300x250 on iOS, set up AdMob mediation with another network (like Facebook Audience Network or Unity Ads).
Summary:
Your code is fine (since test ads + banners work).
This is almost certainly no fill from AdMob for that format on iOS.
Double-check ad unit setup, Info.plist SKAdNetwork entries, and App Store linking.
If all is correct, wait a bit or consider using adaptive banners / mediation to increase fill.
I had this problem as well. This seems like the most elegant solution considering VBA's limitation with not being able to use wdCell or wdRow with the MoveDown method. Works well for me!
If Selection.Information(wdWithInTable) Then Selection.SelectCell
Selection.MoveDown
Did you check the .env on your frontend, or the API_URL? Did you check all the environment variables, both in Railway and in your code?
An .mtl file works better.
Neither FBX Review nor Autodesk 3D Viewer supports vertex colors.
Do any?
I'm having the same issue with their sandbox. I'm requesting access to the production environment so I can continue development, as I've already given up on their sandbox.
If you have any updates, please post here.
Using the levels parameter to reduce the number of levels also worked for me. It produces cruder contour plots of course. Something like levels=5.
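For example, assuming a matplotlib-style contour call (synthetic data, just to show where the parameter goes):
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(x, x)
Z = np.sin(X) * np.cos(Y)          # synthetic surface for illustration

plt.contour(X, Y, Z, levels=5)     # only 5 contour levels: cruder, but much cheaper to draw
plt.show()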
Cloud HTTPS Configuration
In the Cloudflare tunnel, I pointed "https://moodle.mydomain.com" to "http://localhost".
In moodle/config.php, make the following changes:
$CFG->wwwroot = 'https://moodle.domain.com';
$CFG->sslproxy = true;
On iOS you won’t get the raw apns.payload back from activeNotifications. iOS only exposes the aps fields that are mapped into the notification. If you need custom data, put it under the data key in your FCM message and read it from there, or include it in the local show() call when displaying the notification on the device.
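For example, on the sending side (a sketch using the firebase-admin Python SDK; the token and field names are placeholders):
import firebase_admin
from firebase_admin import messaging

firebase_admin.initialize_app()  # assumes default credentials are configured

message = messaging.Message(
    notification=messaging.Notification(title="Order update", body="Your order shipped"),
    data={"order_id": "123", "screen": "orders"},  # custom values go under data, not apns.payload
    token="DEVICE_REGISTRATION_TOKEN",             # placeholder
)
messaging.send(message)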
I had also tried everything you have done, but nothing worked. The way I resolved it was by uninstalling NetBeans from Program Files, from NetBeans AppData, and everything else NetBeans had on my system. Do not forget to check all the checkboxes in the NetBeans uninstaller. After that, install it again. This won't delete your created projects and their files.
I'm experiencing the same error. I'm also using a demo account with the XM broker. I don't know if the problem is because the account is a demo.
I implemented it in both Python and Node, but the same problem. Sometimes it works, sometimes it doesn't. When the timeout occurs, it stays for a while and then goes away. But it always comes back.
I've already sent a message to MetaAPI support, but haven't received a response.
I'm trying to see what can be done because I need a way to control my MetaTrader account with Python/Node through a Linux operating system. It's really annoying that I can't use the official MetaTrader library in Python for Linux, only Windows.
From the ImageMagick docs:
Use -define trim:edges={north,east,south,west} separated by commas to only trim the specified edges of the image
So you can trim only the bottom border with
magick pre-trim.png -define trim:edges=south -trim post-trim.png
I'm getting this too, and it feels to me like a Microsoft bug.
This is a case of performing three analyses, and looking at partial mitigations.
(1) You have a vulnerability in the package, but does it constitute a vulnerability in the system?
Versions of Log4J have vulnerabilities when used in specific ways. Do you know whether the vulnerability is detected by simply seeing the version has a CVE, or is the vulnerability exploitable in your use case? Has a penetration test been done that validates the Log4J vulnerability causes SharePoint 2013 to be vulnerable in some way?
(2) If you have a vulnerability in the system, does it constitute a risk?
This is a function of the threat that may exist. If the server is only accessible to internal users then you want to consider at least these two questions:
- Do you have an insider threat which you need to protect against?
- Could the system vulnerability realistically be exploited by an external attacker using a XSS vulnerability or other network-based attack?
(3) What is the value at risk from the vulnerability compared to the value at risk from the functionality supported?
Let's say that you quantify a potential loss (direct losses, reputation, penalties) of a million dollars from the vulnerability being exploited. If the value of the functionality exceeds this, then economically you should retain the service, potentially with mitigations in place.
Mitigations
It may be that the vulnerability can be mitigated by disabling specific features of Log4J. For example, if you do not require JNDI support (quite likely you do not) then you can delete specific classes from the .JAR file and not break the service, but prevent JNDI-based attacks.
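As a sketch of that class-deletion approach (rewriting the jar with Python's zipfile; the jar name is a placeholder and the class path shown is the well-known JNDI lookup class, so adjust both to whatever your assessment identifies, and keep a backup):
import shutil
import zipfile

jar = "log4j-core-2.x.jar"                 # placeholder file name
shutil.copy(jar, jar + ".bak")             # keep the original as a backup

with zipfile.ZipFile(jar + ".bak") as src, zipfile.ZipFile(jar, "w") as dst:
    for item in src.infolist():
        # drop the JNDI lookup class, copy everything else unchanged
        if item.filename != "org/apache/logging/log4j/core/lookup/JndiLookup.class":
            dst.writestr(item, src.read(item.filename))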
Alternatively, can you put a WAF on the server to filter attacks?
I also faced a similar issue and found this answer on GitHub:
https://github.com/prisma/prisma/discussions/8925#discussioncomment-1233481
and it solved my problem. Basically, I stopped the Postgres service running locally on my PC and used a Docker container instead.
null check:
if (obj === 'null') { return null; } // null unchanged
Here you’re comparing against the string "null", not the actual null.
So if you pass null, it will be treated as an object and go into the object-handling block.
✅ Fix:
if (obj === null) { return "null"; } // real null
Capacitor doesn't support 16 KB page sizes yet. You can just ignore this message for now. The deadline for implementing this is November 2025, and it can be extended by the developer to May 2026, so there's nothing to worry about yet.
Change the package name of the project and it will be fixed.
If you are using something like com.example.wallpaper, change it to com.raghav.estrowallpaper. That fixes it 100%.
I checked your code and even reproduced it in a sandbox to see whether there are any problems, but you should also tell me the conditions you are using: which elements have a z-index, and in what form.
I am waiting for your answer so that I can guide you completely and correctly.
example in sandbox:
https://codesandbox.io/p/sandbox/238mz2
protected override void WndProc(ref Message m)
{
const int WM_NCLBUTTONDOWN = 0xA1;
const int HTCAPTION = 0x2;
// Block dragging the form by the title bar
if (m.Msg == WM_NCLBUTTONDOWN && (int)m.WParam == HTCAPTION)
{
return; // don't handle the message => the form doesn't move
}
base.WndProc(ref m);
}
Blocking the form drag => the form cannot be resized.
After doing some minor research and configuration review, I noticed that since I had recently deployed a container to my local Docker engine, the project's targets file pointed to a later version of some of the packages. These packages were in the 7.0.x range. The packages my project was referencing were 5.0.x, which means my project was behind several versions.
After reviewing both configurations and the versions, I simply updated the same packages to the latest version, 7.0.x, performed a "Clean", then a "Build", and was able to successfully build the project. I hope this will help anyone else who runs into this error.
You can check the FPS by opening the DevTools in VS Code and showing the FPS meter:
Toggle Developer Tools (Ctrl+Shift+P from VS Code) > Show FPS meter (Ctrl+Shift+P again from the DevTools)
I solved the problem myself. I have a fallback to the /[lang] route if a page is not found, but on localhost my Chrome DevTools, while loading the /[lang]/[category] page, tried to fetch /.well-known/appspecific/com.chrome.devtools.json; that returned 404, and the console always showed a double loading of the /[lang] route.
“The problem is with the loader. On the website, the loader takes too long to disappear. I want it to be removed within just 2 seconds so the content loads immediately.”
Make sure you don't have a lower ndkVersion in build.gradle than the one you have set in the SDK Tools.
Some people have apparently already mentioned Toga (see https://toga.beeware.org), the GUI framework, however I'd like to go a bit more in-depth into how all this stuff works. Also, just to add, Briefcase (https://briefcase.beeware.org) is the packaging tool that turns a Python codebase into an Xcode project, installing all dependencies in the process.
So... how is Python running on iOS in the first place? Well, Python is written in C and uses autoconf, so it's possible to compile it for iOS with a bunch of additional handling -- with things like platform identification and more annoyingly, packaging Python properly so it could be distributed through the app store. This includes using Apple's "framework" directory structure and not installing any platform-specific tools, and making each binary extension module a separate "framework" in Apple's sense. And of course, the whole lot of C APIs that Apple has helpfully disabled. References: https://docs.python.org/3/using/ios.html and the whole thread of PRs is at https://github.com/python/cpython/issues/114099 -- this part of functionality is made possible with Python 3.13 officially, and has been maintained in https://github.com/beeware/Python-Apple-support/ and -patched branches of https://github.com/freakboy3742/cpython/ before. CPython still lacks some infrastructure for iOS builds and has not added tvOS/watchOS/visionOS yet so these two repositories serve these purposes as more things gets integrated into official CPython.
Now you would have Python running on iOS, but you'd need a way to access objc APIs. Rubicon Obj-C (https://rubicon.beeware.org/) is a shim that does exactly that, created by the BeeWare project as well. Since the Objective-C runtime is all C underneath it, and Python provides ctypes to call C functions, Rubicon Objective-C thus works by calling the appropriate C functions to send the appropriate messages using the Objective-C runtime, thus being able to access all native Objective-C APIs.
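To give a flavour of what that looks like (a tiny sketch, assuming rubicon-objc is installed and an Objective-C runtime is available, e.g. on macOS or iOS):
from rubicon.objc import ObjCClass

NSURL = ObjCClass("NSURL")                          # look up a native Objective-C class by name
url = NSURL.URLWithString("https://beeware.org/")   # the message send goes through the C runtime functions
print(url.absoluteString)                           # returned ObjC objects are wrapped for use from Python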
To start this Python code that does all the interesting stuff, however, the Python C API is used. The template that Briefcase uses to roll out Xcode projects with Python code is found at https://github.com/beeware/briefcase-iOS-Xcode-template and consists of a bulky part of initialization as seen in the main Objective-C file.
Toga's iOS backend uses Rubicon Objective-C to access the native Objective-C classes to make a UIApplication, UIWindow, UIViews, and all the native controls.
A really good conference talk by Dr. Russell Keith-Magee, founder of Toga, explains how this all works not only on iOS as I described above but also on Android.
I'm putting this info here just in case you're curious or if you'd like to try to embed Python and access the raw Objective-C APIs by yourself. Toga, however, is getting more mature almost every day, and besides simply wrapping the iOS APIs as described above, it is significantly more Pythonic and it abstracts away all the small cross-platform differences of apps on multiple platforms, so if you write a Toga app you can also deploy it on Windows (Winforms), Linux (GTK+), macOS (Cocoa), Android -- although some functionality may not be fully available.
DISCLOSURE
I have contributed to (but am not affiliated with) this project and therefore may be biased, however I have ensured that all my statements above are truthful in terms of the current state of the project. I am NOT AFFILIATED with the BeeWare project; just trying to spread some news of it.
EDIT: if you want to interact with the Python runtime in Swift, you may utilize PythonKit: https://github.com/pvieito/PythonKit -- this is not related to BeeWare nor to official CPython, which were the projects I was talking about earlier.
I also ran into the Keycloak infinite-loop situation, but in the end I found that my problem was a version mismatch between the Keycloak server and the keycloak.js adapter. I also wanted to use it in an internal network environment, so HTTPS was an issue as well. After downgrading both to version 21.1.5, it was resolved.
As @aliaksandr-s pointed out:
No, what you are trying to do is impossible. Browsers intentionally do not allow web pages to change print dialog settings. If what you want is a PDF then use a library to generate it on the Server/Client side.
He is correct: it's not possible to change Chrome's default print margin setting using JavaScript or CSS.
The best you can do is try to minimize margins:
@media print {
@page { margin: 0; }
html, body { margin: 0; padding: 0; }
}
But if you did this server-side, you could set zero margins in the PDF. An example would be:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com', { waitUntil: 'networkidle0' });
await page.pdf({
path: 'output.pdf',
margin: { top: 0, right: 0, bottom: 0, left: 0 },
printBackground: true,
format: 'A4'
});
await browser.close();
})();
I had the same issue, but an explicit cast didn't resolve it. I needed to change the data type from unsigned char* to uint8_t.
These files are used in part for showing the git information above functions and classes. They are revisions of those files in order to show the revision breakdown when you hover over the revisions part.
You can disable this behavior in the options by disabling CodeLens under the Text Editor options.
If you are worried about the size these files occupy on disk, you can make the folder a symlink to another drive to store them somewhere else. Alternatively, a scheduled task running a PowerShell script to remove the contents of the folder at each system start would also work, though that means they have to be generated again when you open a file.
I just published a package for this purpose:
https://www.npmjs.com/package/express-switchware
Usage:
const stripeHandler: express.RequestHandler = (req, res) =>
res.send("Processed payment with Stripe");
const paypalHandler: express.RequestHandler = (req, res) =>
res.send("Processed payment with PayPal");
app.post(
"/checkout",
expressSwitchware(
(req) => req.body.provider, // e.g. { "provider": "stripe" }
{
stripe: stripeHandler,
paypal: paypalHandler,
}
)
);
I think it is simple and elegant.
If I understood your question correctly, you just need the sequence of the gene. Is it possible to try another parser for this purpose, i.e. SeqIO.parse(gb_file, "genbank")? Then a loop, for record in SeqIO.parse(gb_file, "genbank"):, iterates through the records. Then use another loop as follows:
from Bio import SeqIO
from Bio.SeqFeature import FeatureLocation
def extract_gene_name_seq(gb_file):
gene_name_seq = []
try:
for record in SeqIO.parse(gb_file, "genbank"):
# The SeqIO.parse() function takes two main arguments:
# A file handle (or filename) to read the data from.
# The format of the sequence file.
# It returns an iterator that yields SeqRecord objects, one for each sequence in the file.
for feature in record.features:
if feature.type == "CDS":
try:
name = feature.qualifiers.get('gene', [''])[0]
location = feature.location
if isinstance(location, FeatureLocation):
start = location.start
end = location.end
gene_sequence = location.extract(record.seq)
strand = location.strand
gene_name_seq.append((name, strand, start, end, gene_sequence))
else:
print(f"Skipping feature with non-standard location: {feature}")
except (KeyError, AttributeError) as e:
print(f"Error processing feature: {feature}. Error: {e}")
continue
except Exception as e:
print(f"Error parsing GenBank file: {e}")
return []
return gene_name_seq
# Example usage:
gb_file = "Capsicum_annuum.gb" # Replace with your GenBank file name
gene_name_seq_data = extract_gene_name_seq(gb_file)
print('Length of gene_name_seq_data', len(gene_name_seq_data))
if gene_name_seq_data:
for name, strand, start, end, gene_sequence in gene_name_seq_data:
print(f"Gene: {name}, Strand {strand}, Start {start}, End {end}, Gene_sequence: {gene_sequence}")
else:
print("No gene information found or an error occurred.")
I've also stumbled upon this. It seems that in Hibernate 6.6 Expression.as(String.class) no longer generates an SQL CAST. It is now just an unsafe Java type cast. To use LIKE on a numeric column you must explicitly cast via JpaExpression#cast
It's part of the Migration Guide
ARM PMULL or PMULL2 would be the equivalent.
Okay, the problem is funny: I used adb exec-out run-as ... cat to copy the files from my Pixel to my Windows machine. At first it worked fine, but now it doesn't anymore: it includes a BOM. On the device itself the files work fine. Thanks to all for helping. Awkward :)
I'm not a professional but here's what I've got.
import cv2
import numpy as np
test1=np.zeros((90,90,3))
test2=np.zeros((90,90,3))
img=cv2.imread('colors.png')
img_hsv=cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
light_color=np.array([[[245,183,22]]],dtype="uint8")
test1[0:45,0:90]=cv2.cvtColor(light_color,cv2.COLOR_RGB2BGR)
# [22,232,245]
light_color=cv2.cvtColor(light_color,cv2.COLOR_RGB2HSV)
test1[45:90,0:90]=light_color
light_color=np.ravel(light_color)
dark_color=np.array([[[255,193,32]]],dtype="uint8")
test2[0:45,0:90]=cv2.cvtColor(dark_color,cv2.COLOR_RGB2BGR)
# [22,223,255]
dark_color=cv2.cvtColor(dark_color,cv2.COLOR_RGB2HSV)
test2[45:90,0:90]=dark_color
dark_color=np.ravel(dark_color)
#light_color = np.array([22,232,245])
#dark_color = np.array([32,218,255])
lc_sat=light_color[1]
light_color[1]=dark_color[1]
dark_color[1]=lc_sat
mask=cv2.inRange(img_hsv,light_color,dark_color)
#mask=cv2.inRange(img,np.array([22,232,245],np.uint8),np.array([22,223,255],np.uint8))
mask2=cv2.inRange(img,np.array([22,183,245],np.uint8),np.array([32,193,255],np.uint8))
result=cv2.bitwise_and(img_hsv,img_hsv,mask=mask)
result2=cv2.bitwise_and(img,img,mask=mask2)
cv2.imwrite("hsv_mask1_hsv.png",mask)
cv2.imwrite("hsv_mask2_bgr.png",mask2)
cv2.imwrite("hsv_result1_hsv.png",cv2.cvtColor(result,cv2.COLOR_HSV2BGR))
cv2.imwrite("hsv_result2_bgr.png",result2)
cv2.imwrite("hsv_lightc.png",test1)
cv2.imwrite("hsv_darkc.png",test2)
cv2.imwrite("hsv_colors.png",img_hsv)
Here is the correct syntax:
curl -X POST \
-H "Authorization: Bearer $BOT_TOKEN" \
-H "Content-Type: application/json" \
-d '{"channel": "CXXXXXXXX", "text": "Hello, world! <@UXXXXXXXX>"}' \
https://slack.com/api/chat.postMessage
Just replace the CXXXXXXXX with the channel id you want to target and the UXXXXXXXX with the user id you want to tag (optional). To learn more on how to find these values and generate a $BOT_TOKEN, read my blog: https://ducttapecode.com/blog/slack-integration/article/
You can’t change markers in swarmplot directly—overlay them with ax.scatter instead
ax = sns.swarmplot(data=df, x="value", y="letter", hue="type")
highlight = df[df["letter"].isin(["AB","AE"])]
ax.scatter(highlight["value"], highlight["letter"], marker="*", s=200, c="black")
For anyone who cares about 29/2 special handling, here is my suggestion (thx to @Gareth for the incentive):
import calendar
def add_years(d, years):
"""
Return the same calendar date (month and day) in the destination year.
In the case of leap years, if d.date is 29/2/xx and the target year (xx+years) is
not a leap year it will return 28/2.
Conversely, if d.date is 28/2/xx and the target year is a leap year, it will
return 29/2.
"""
sy, ty = d.year, d.year + years
if all([calendar.isleap(sy), calendar.isleap(ty)]):
ret = d.replace(year=d.year+years)
elif all([calendar.isleap(sy), d.month == 2, d.day == 29]):
ret = d.replace(day=d.day-1).replace(year=d.year+years)
elif all([calendar.isleap(ty), d.month == 2, d.day == 28]):
ret = d.replace(year=d.year+years).replace(day=d.day+1)
else:
ret = d.replace(year=d.year + years)
return ret
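A quick usage check of the helper above (dates chosen to hit the leap-year branches):
from datetime import date

print(add_years(date(2020, 2, 29), 1))   # 2021-02-28: target year is not a leap year
print(add_years(date(2019, 2, 28), 1))   # 2020-02-29: target year is a leap year
print(add_years(date(2021, 7, 15), 3))   # 2024-07-15: ordinary date, month and day unchanged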
You don't need to be a service to use SetTokenInformation. In your existing uiAccess process, duplicate its token, explicitly set TokenUIAccess to true, and you should be good to go. See also https://stackoverflow.com/a/23214997/21475517 (translate to C# as needed).
short answer is 'no', there is no way in DNG to do this. your export/modify/import method works but as you said you have a synchronization problem. there is a REST API for DNG and using that you could do essentially the same thing with a script, but again there is a synchronization problem. welcome to the disappointment of using DNG after DOORS. ¯\_(ツ)_/¯
Take care, because when you want to grant permission on a topic, you must not use "arn:aws:kafka:eu-west-1:123456789:cluster/test-cluster/9f4ea0a3-75bc-4ff9-a971-73efa2ef73c9-9/topic/test-topic2" but instead "arn:aws:kafka:eu-west-1:123456789:topic/test-cluster/9f4ea0a3-75bc-4ff9-a971-73efa2ef73c9-9/topic/test-topic2"
==> change :cluster to :topic
You are most probably going to want to have separate package files, as those two folders will have different tooling needs. That's not a must, though; e.g. you may have an SSR React app that needs to be served. But the way you explained it ("client = React", "server = Express"), it sounds like you are building two separate, integrated applications.
A more elaborate answer could include a way of working with workspaces, where you may indeed want to have a root package.json.
You have to reassign previous (and next) on every iteration of the for loop:
for obj in mod do {
    po = previous(obj)
    no = next(obj)
    // ... rest of the loop body
}
You don't need the refresh.
I finally found a way to do that, thanks to Rajeev KR's answer, which led me to the solution.
The hostsArray string has to be converted to JSON, which can be done with a type conversion to json: https://github.com/karatelabs/karate#type-conversion
# sample of hostsArray
* def hostsArray = '[{"hostid": "1234"},{"hostid": "4567"}, {"hostid": "9865"}]'
* json hostsArrayJson = hostsArray
* def myjson =
"""
{
"key1": "val1",
"params" : {
"key2": "val2",
"hosts" : '#(hostsArrayJson)'
}
"""
GCP has released a new Composer version, composer-2.14.0-airflow-2.10.5, that resolves the dependency loop described above by pointing out the conflict (at least in my case). You can check the release notes and verify that this version was added to improve PyPI dependency handling.
Check out my repository: https://github.com/Ujjwalbiswas09/Dae-Parser-Java-Android
I built this totally from scratch, specifically for Android, using Java.
Bold Reports is a modern and robust alternative to the legacy Report Viewer control, offering full support for RDL and RDLC formats so you can continue using your existing SSRS reports without needing to rewrite them. It is fully compatible with all major browsers including Chrome, Firefox, Safari, and Edge, overcoming the limitations of older viewers that were tied to Internet Explorer. With a dedicated ASP.NET MVC Report Viewer control, Bold Reports integrates seamlessly into MVC applications using NuGet packages such as BoldReports.Web, BoldReports.Mvc5, and BoldReports.JavaScript. We also offer support for .NET Core, ensuring compatibility with modern .NET applications.
Unlike the traditional Report Viewer, Bold Reports is JavaScript-based and does not rely on ViewState, making it ideal for modern web development. It also provides extensive customization options, localization support, and the ability to extend functionality through events and APIs.
Bold Reports uses a Web API controller to load and show reports, which works well with today’s web development practices. This setup helps your app run faster and makes it easier to connect with other services. For detailed implementation guidance, refer to the official documentation: How to Add the Report Viewer to an ASP.NET MVC Application – Bold Reports.
How about this:
if ((typeof process !== 'undefined') &&
(process.release.name.search(/node|io.js/) !== -1)) {
console.log('this script is running on the server');
} else {
console.log('this script is running in client side');
}
The solution here ended up being a change of uC from the STM32L series to the STM32U series, meaning a 32 bit timer was available on the same pin.
This, however, also had issues: TIM2 Channel 1 did not work (tried on two MCUs), so after a short to the next pin, TIM2 Channel 2 worked fine.
None of the suggested methods (above) using DMA worked, they suffered similar issues to those reported in the question.
The statement: ""There is currently no theoretical reason to use neural networks with any more than two hidden layers" was made by Heaton in his 2008 work, and reflects the theoretical perspective at that time: that multilayer perceptrons (MLPs) with more than two hidden layers had no guaranteed theoretical advantage over shallower networks. However, as is often the case in a fast-evolving field like deep learning, this claim has since been overtaken by both empirical evidence and new theoretical insights.
I'd recommend trying my lib, which adds some consistency when developing with FastAPI and Socket.IO:
https://github.com/bandirom/fastapi-socketio-handler
The lib is under active development, so feel free to contribute or open an issue.
Analyzer `6.0` is too old; you'll need to remove the dependency pin if you want to use more recent versions of various packages:
dependency_overrides:
analyzer: ">=6.0.0 <6.6.0"
then run dart pub get.
This issue has now been resolved. I raised an issue in the googlecolab/colabtools GitHub repo, tried the same code as above this morning, and it loaded fine.
I know it's been a while since you asked, but have a look at https://github.com/PetrVys/MotionPhoto2. You'll need to port it from python, but the files created are working in most viewers, and HEIC files are supported too - and Google Photos is the primary target.
form.append('audio', {
  uri: Platform.OS === 'ios' ? uri.replace('file://', '') : uri,
  name: name || 'upload-file',
  type: mime || 'application/octet-stream',
});
Use the correct MIME type if available, otherwise fall back to "application/octet-stream". This ensures the file always has a valid type during upload.
For a notebook in VS Code, the following will give you the notebook name. Nothing beyond the standard library is needed:
import os

notebook_name = os.path.basename(globals().get('__vsc_ipynb_file__', 'unknown_notebook')).replace('.ipynb', '')
print(f"Notebook name: {notebook_name}")
I have solved the problem. When RStudio first starts, it tries to make a directory and create a log file in .local; if it does not have enough permission, it shuts down. Opening RStudio from a terminal shows the complete error information.
If anyone happens to stumble upon this post, it's a new feature in C# 14 https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-14.0/null-conditional-assignment.
I was looking for something similar, wanting to customize the formatter. The best solution I could find was to download the Eclipse IDE itself and use the built-in Formatter Profile Editor, available under Project -> Properties, then Java Code Style -> Formatter (and perhaps -> Configure Workspace Settings)
While it is very convoluted and cumbersome, it is also a much more powerful and versatile version of the "Java Formatter Settings with Preview" available in VS Code. It offers a preview of every single setting using a fitting code example to visualize the changes. It also allows the examples to be changed and custom code to be tested with the "View/edit raw code" option and the "Custom preview contents" toggle. Once satisfied, it can be exported and the resulting file used as the project's eclipse-formatter.xml.
While it's not a 1:1 description of every setting and their respective byte combinations, I feel that it fulfills that role very well.
Currently, nested objects are neither filterable nor vectorized, as per our docs:
Currently, object and object[] datatype properties are not indexed and not vectorized. Future plans include the ability to index nested properties, for example to allow for filtering on nested properties and vectorization options.
Our team is currently working on that feature, and you can keep track of it here:
https://github.com/weaviate/weaviate/issues/3694
Thanks!
To change the sync function on an App Endpoint, please use the management API https://docs.couchbase.com/cloud/management-api-reference/index.html#tag/App-Endpoints/operation/putAccessFunction
Only /_roles, /_users and /_session are allowed through the admin port 4985 in the Connect tab on App Endpoints.
For the public port 4984, these are the supported endpoints https://docs.couchbase.com/cloud/app-services/references/rest_api_public.html
I've done some research on this topic, and I think it's only possible on the TypeScript platform, not with regular JS anymore, after the upgrade. It's also stated in the upgrade notes to switch: "If you're using propTypes, we recommend migrating to TypeScript or another type-checking solution." Hope this helps.
All the Emacs Verilog-mode like indentation/vertical alignment needs are satisfied in VSCode with DVT through https://eda.amiq.com/documentation/vscode/sv/toc/code-formatting/indentation.html.
I'll post an update once I figure it out. That way, if anyone else has my problem, hopefully they will find the solution.
This was fixed for me after I upgraded my flutter version
Another pitfall I can see is that many articles about client-side OAuth authorization don't talk about verifying the client's access token validity on the resource/API server side. I've found some talks/docs about the "introspection" endpoint, but they are rare.
That's why I have asked this question here in the context of Laravel Socialite.
Can users specify that Rust be installed on the D: drive rather than the C: drive (C:\users\xxx\.rustup)?
You can’t directly create DataWedge profiles via Flutter APIs because Zebra doesn’t expose a Flutter plugin for that yet. The trick is to use the DataWedge intent interface.
From Flutter, use MethodChannel to send an Android intent.
Broadcast to DataWedge with the com.symbol.datawedge.api.CREATE_PROFILE action.
Once the profile exists, push configuration with SET_CONFIG intents (scanner input, keystroke/output plugin, intent delivery to your Flutter activity).
Keep the profile name consistent with your app package so you can reuse it across installs.
It’s a bit boilerplate-heavy, but once the intent channel is set up, you can manage profiles from Flutter without touching native code too much.
There seems to be some sort of issue with google/apiclient and a VM shared folder. I was able to install it fine via Composer in a non-shared folder in the Ubuntu VM in less than a second, but it always failed when trying to install it in the shared folder.
Then, even after it was installed, I had trouble moving the vendor folder into the VM shared folder.