If for some reason you don't want to use `str.rfind` and are only trying to find one character, you can collect all the indices that match and take the maximum, like so:
word = "banana"
a = "a"
last_index = max(i for (i, c) in enumerate(word) if c == a)
print(last_index)
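For comparison, the `str.rfind` approach is a one-liner; note it returns -1 when the character is absent, while the max() approach raises a ValueError:
word = "banana"
print(word.rfind("a"))  # 5, the index of the last "a"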
To convert a spreadsheet (e.g., an Excel .xlsx or .csv file) to a PDF, please upload the file you'd like to convert. Once uploaded, I'll handle the conversion and provide you with the PDF version.
There is also an alternative solution written in the docs,
https://docs.pydantic.dev/latest/concepts/pydantic_settings/#disabling-json-parsing
For the above example, this could mean:
# Note - Annotated and NoDecode are the new items compared to the original code
from typing import Annotated, Tuple

from pydantic import Field, field_validator
from pydantic_settings import BaseSettings, NoDecode


class JobSettings(BaseSettings):
    wp_generate_funnel_box: bool = Field(True)
    wp_funnel_box_dims_mm: Annotated[Tuple[int, int, int], NoDecode] = Field((380, 90, 380))

    @field_validator('wp_funnel_box_dims_mm', mode='before')
    @classmethod
    def parse_int_tuple(cls, v) -> tuple[int, int, int]:
        # parse a comma-separated env string like "380,90,380"
        output = tuple(int(x.strip()) for x in v.split(','))
        assert len(output) == 3
        return output

    model_config = {
        "env_file": ".env",
        "env_file_encoding": "utf-8",
        "extra": "ignore",
    }
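With that in place, a value set in the .env file is parsed by the validator instead of being JSON-decoded. A quick sanity check (the .env entry shown in the comment is a hypothetical example):
# assuming .env contains the line: WP_FUNNEL_BOX_DIMS_MM=400,100,400
settings = JobSettings()
print(settings.wp_funnel_box_dims_mm)  # (400, 100, 400)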
Hello @Matthijs van Zetten,
You are running your app on https://0.0.0.0:5001 and the targetPort is 5001 in your Container App config, which looks good, but the error "upstream connect error or disconnect/reset before headers" usually means the ingress cannot talk to your app.
ACA ingress only supports HTTP over TCP; it terminates TLS at the ingress, not inside your container. So when your app listens for HTTPS, the ingress can't speak HTTPS to your container - it expects plain HTTP on the targetPort. That's why it resets the connection.
To fix it, follow these steps:
Change your app to listen on plain HTTP inside the container (e.g., http://0.0.0.0:5000).
Set ASPNETCORE_URLS=http://0.0.0.0:5000 in your Dockerfile, then update your Docker EXPOSE to 5000 and set targetPort: 5000 in the Container App.
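A minimal sketch of those Dockerfile changes (base image, copy path, and assembly name are placeholders for whatever your project uses):
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY ./publish .
# listen on plain HTTP inside the container; ingress terminates TLS
ENV ASPNETCORE_URLS=http://0.0.0.0:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyApp.dll"]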
After that you can redeploy and test it using your custom domain. Azure ingress will handle HTTPS externally, and your app just needs to serve HTTP internally.
Let me know if you want to keep HTTPS inside the container; you would need to bypass ingress and expose the container differently, which is not recommended.
This is only feasible in specific scenarios, such as when using TCP-based ingress instead of HTTP, utilizing Azure Container Apps jobs with private networking or direct IP routing, or building an internal mesh where containers communicate via mTLS though often still HTTP at the ingress. For most use cases like public web APIs and apps, it is best to let ACA manage HTTPS at ingress.
Change the frame handling; it should work. You can also try changing scaleFactor and minNeighbors.
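For reference, a minimal sketch of where those parameters live, assuming a standard Haar-cascade detection pipeline (the input file is a placeholder):
import cv2

# hypothetical input; in a video pipeline this would be the current frame
frame = cv2.imread("input.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# tune scaleFactor and minNeighbors to trade detection rate against false positives
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)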
I'd like to add the benefit of `<inheritdoc/>` in the Rider IDE when you are editing the corresponding file. You can choose to render doc comments, so it looks like this:
Toggle the rendered view via the tooltip or a shortcut.
See the documentation without needing to hover over it.
When you don't use `<inheritdoc/>`, you will see the documentation only on hover:
Please check my answer here:
Firebase functions V2: SyntaxError: Cannot use import statement outside a module at wrapSafe
I don't know exactly what your use case is, but normally fact tables are multidimensional (e.g., sales per geography/time/product dimension). If you have one single dimension, then all fact tables have the same cardinality and therefore you could aggregate them, but in that case this is more a question about your specific DBMS and how it performs the retrieval of data.
Having 10 different tables could make sense in some scenarios, if each fact table belongs to a specific subject area and contains related facts that are rarely combined with facts in the other fact tables. In this case you're optimizing the read operations, as the entire row is relevant, rather than combining in the same row facts that are unlikely to be queried together. But it also depends on how your DBMS retrieves the data.
Some years ago I split a table with 400 columns and several million rows in Teradata 10, before they had built-in horizontal partitioning, because Teradata 10 was reading entire rows, filling blocks of a specific size, and then selecting the specific columns chosen. By splitting the table vertically into several subject-area tables, I improved the efficiency of the block reading, as practically no columns were discarded, so the entire memory blocks were relevant.
I've played around with this for you.
First of all, GD (and even Imagick in its default operations) doesn't support perspective transformations - which is what you need to make the banner look like it's part of a 3D surface.
You can't realistically get perspective distortion with GD unless you do it manually with polygon fills, which I would skip for sure - too complex.
However, with Imagick I used distortImage() with Imagick::DISTORTION_PERSPECTIVE, as Chris Haas mentioned in the comments.
I've created this code:
function mergeWithPerspective(string $basePath, string $bannerPath, string $outputPath): void {
    $imagick = new Imagick(realpath($basePath));
    $banner = new Imagick(realpath($bannerPath));

    // Resize banner to fixed dimensions for consistent perspective mapping
    $banner->resizeImage(230, 300, Imagick::FILTER_LANCZOS, 1);

    // Map corners of the banner to positions on the truck's black panel
    $controlPoints = [
        0, 0, 496, 145,
        230, 0, 715, 163,
        0, 300, 495, 407,
        230, 300, 712, 375
    ];

    $banner->setImageVirtualPixelMethod(Imagick::VIRTUALPIXELMETHOD_TRANSPARENT);
    $banner->distortImage(Imagick::DISTORTION_PERSPECTIVE, $controlPoints, true);

    // Composite the distorted banner onto the truck image at a specific offset
    $imagick->compositeImage($banner, Imagick::COMPOSITE_OVER, 507, 110);
    $imagick->writeImage($outputPath);
    $imagick->clear();
}
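Usage is then a single call; the paths below are placeholders matching the repository layout:
mergeWithPerspective('src/source-images/truck.jpg', 'src/source-images/banner.png', 'src/processed/result.jpg');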
I applied a perspective distortion to the banner and tried to place it realistically.
The downside of this, as you can see, is that you have to define a starting X and Y coordinate with compositeImage(), as well as create the necessary control points in $controlPoints.
I've created a github repository for this:
https://github.com/marktaborosi/stackoverflow-79562669
The source images are in:
src/source-images
The processed image is in:
src/processed
Hope this helps!
You can use a JSON validator and beautifier tool available online, e.g. https://www.jsonvalidators.org/
Answer (April 30, 2025):
I had the same issue in Visual Studio Community 2022 where a file (e.g., UnitTest1.cs) was open, but the content wasn't visible in the editor - even though the file existed and was part of the project.
I tried the following:
Restarted Visual Studio
Rebooted my system
Unfortunately, neither of those worked.
I updated Visual Studio to the latest version, and after the update, the issue was resolved.
If you're facing a similar problem, I recommend checking for updates from:
Help > Check for Updates
Try to set (or verify) your GPG key pair (private and public) on your local machine, using the "rsa agent".
I got the solution: you can use the csv package and upload your sheet to the Firebase database.
The simple step is that you just need to create a CSV version of your sheet. If you need any help, let me know; this is my LinkedIn profile: https://www.linkedin.com/in/janvi-mangukiya-0b9233267/
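A minimal sketch of that idea in Python (the service-account path, database URL, and file names are all assumptions):
import csv

import firebase_admin
from firebase_admin import credentials, db

# hypothetical credentials and Realtime Database URL
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project-default-rtdb.firebaseio.com"})

# push each CSV row to the database, keyed by row index
with open("sheet.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        db.reference(f"rows/{i}").set(row)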
It worked after I upgraded the SDK to a newer version:
Settings.embed_model = HuggingFaceEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")
check this
Little late to the party. The Google Navigation SDK allows you to get turn-by-turn data and manoeuvre events. In short, here is what I did:
Create a mobile app which uses the Google Navigation SDK, where the user initiates the navigation (like you do with Google Maps or Waze).
Inside the app, listen to the manoeuvre events ("Turn right in 30 meters") and broadcast each event to the ESP32 device over BLE.
Detailed article here: https://medium.com/@dhruv-pandey93/turn-by-turn-navigation-on-small-displays-9ea171474095
Change this:
cv2.rectangle(gray, ...)
To:
cv2.rectangle(frame, ...)
Have you already tried to configure Diagnostic settings on the service bus?
With these logs you should be able to get all the information you can get from within Service Bus by default. If you forward your messages to a Log Analytics Workspace you can then query all logs and aggregate them or correlate different logs together with KQL.
More information can be found here:
You should use it like this:
TextStyle(
fontWeight: FontWeight.w700,
fontFamily: 'Inter',
fontSize: 18,
color: Colors.black,
package: 'your_package_name',
);
There are online services that will let you test the server while specifying TLS versions -- meaning you can say "connect to this server using only TLSv1.1".
CheckTLS.com tests TLS on mail servers but you can force it to test HTTPS (443). From their //email/testTo (TestReceiver) test, set the Target to the host you want to test, and under Advanced Options set the mx port to 443, turn on Direct TLS, and put TLSv1_1 or TLSv1_2 in the TLS Version field.
It also works to move the providers from the module to the component where recaptcha is used.
Having hit this problem (three levels deep), I reviewed the answers here, and my logic, and found:
I'm looping to determine if a user has an active place of employment. That's meaningfully a function with a bool result. Being a dedicated proponent of multiple exit points from functions, that's what I did. It also reduces the line count in the calling function, so I consider the issue settled properly.
Thanks, Michael!
User Assigned Managed Identity not showing as assigned in Azure Data Factory despite correct configuration
Troubleshooting steps:
Verify ADF Linked Service:
Check if the Authentication type is set to "User-assigned Managed Identity" while creating Linked Service.
Use the UAMI in a Linked Service to activate it and make ADF use it.
Simply assigning the UAMI to ADF does not mean ADF will use it automatically to authenticate to resources.
Sometimes the Portal UI doesn't reflect it correctly due to caching or latency.
This might be a UI rendering issue, because the Portal UI sometimes caches old assignments even after they succeed. Try refreshing your browser; it will show up after some time under the "Assigned to Data Factory" option in the Linked Service.
You can also confirm whether the UAMI is working by testing access:
In ADF I tried referencing a secret via a Key Vault linked service that uses the UAMI (after assigning the UAMI the "Key Vault Secrets User" role).
Getting the secret successfully confirms that it's working end-to-end.
It seems simple - I didn't test it with all plugins, but it seems like a good solution.
I'm also facing the same issue in Node.js.
I created a GET route as well that returns a 200 response; then it works fine.
res.writeHead(200, {'content-type': 'text/plain'});
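A minimal sketch of such a GET route in plain Node (the port and response body are placeholders):
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET') {
    // answer with 200 so the caller gets a valid response
    res.writeHead(200, { 'content-type': 'text/plain' });
    res.end('OK');
  }
});

server.listen(3000);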
Hey, you need these scopes to post with the API:
user.info.basic,video.publish,video.upload
And you need to query the info first:
https://developers.tiktok.com/doc/content-posting-api-get-started?enter_method=left_navigation
TensorFlow is often a version or two behind in supporting the latest Python versions. As of now, TensorFlow 2.18 supports Python 3.11. I would suggest downgrading Python to 3.11.
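For example, with conda you could create a fresh Python 3.11 environment (the environment name is a placeholder):
conda create -n tf-env python=3.11
conda activate tf-env
pip install tensorflow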
Generally, when upgrading from .NET Framework to .NET Core you first need to upgrade to .NET Standard (1.x -> 2.x, or directly to 2.1, whichever is least "painful"), and then after that upgrade to whatever version of .NET Core you want to target. Useful links:
An effective way to solve your problem would be to create dummy nodes for your ultra-narrow aisles: a set of normal nodes for the points in ultra-narrow aisles and a set of dummy nodes for those same points. The entrance to an aisle should be either a dummy node or a normal node, depending on the side of the aisle you are entering from. If you now set the distance between dummy nodes and normal nodes in an aisle to a very large number (e.g., infinity), you will always exit through the side you came in, as that path is always shorter. A toy sketch of this distance setup follows below.
Note: for heuristic approaches (which I assume you are using) this may have a negligible effect on your results or solving time. For exact solutions (using linear programs) this increases the problem size by the number of nodes in narrow aisles, and may exponentially increase the solving time for this problem.
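The promised toy sketch in plain Python (node names and the in-aisle distance are mine, purely illustrative):
import math

INF = math.inf

# one aisle with points p1/p2; each point gets a normal node (one entry side)
# and a dummy node (the other entry side)
nodes = ["p1", "p2", "p1_dummy", "p2_dummy"]
dist = {(a, b): INF for a in nodes for b in nodes if a != b}

# travel inside the aisle is only cheap between nodes of the same side
dist[("p1", "p2")] = dist[("p2", "p1")] = 5.0
dist[("p1_dummy", "p2_dummy")] = dist[("p2_dummy", "p1_dummy")] = 5.0

# normal <-> dummy distances stay infinite, so a route entering on one
# side can never leave through the other side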
I wrote a utility called "Bits" which does exactly what you want. It installs an Explorer right-click menu that when selected analyses the file and tells you if it’s 32 or 64-bit.
It’s around 5.5K in size, has no dependencies beyond what is already present on the system and is a single file. The only installation required is to register a right-click menu with Explorer. The installer/uninstaller is embedded in the EXE.
Once installed you simply right-click on the file you want to check and choose, “32 or 64-bit?” from the menu.
You can get it or view the source here:
Running the following command as root worked for me only temporarily, as the real issue was with SELinux.
pm2 update
When I checked the systemd entry for pm2, I could see that the PID file could not be opened due to SELinux. So I had to create a new rule allowing systemd to check whether the PID file exists.
sudo cat /var/log/audit/audit.log | grep systemd | grep pm2 | audit2allow -M systemdpm2
Then I applied the new rule:
sudo semodule -i systemdpm2.pp
For me this summary itself doesn't show up. I've configured the JMeter properties, but I'm still facing the same issue. Can someone please help?
I was trying to solve the same problem for 6 days. If you find the solution, please tell me. Thank you.
As the link in Rogier van het Schip's answer (https://stackoverflow.com/a/41162452/30412497) is broken, but I do not currently have enough reputation to comment, here is the content being linked to in the original answer.
https://jasonkarns.wordpress.com/2011/11/15/subdirectory-checkouts-with-git-sparse-checkout/
Thanks to Ihdina for the suggestion, I will improve this code in the future.
The problem has been solved. This i2c_wait_ack is suitable for software-simulated I2C where sda is a push-pull output pin, that is, I2C communication with only one master and one slave.
The problem lies in the i2c_wait_ack function in the software I2C: the data register always reads 1 because the host releases the bus (the sda pin being an open-drain output). This causes a timeout, a stop signal is output, and a NACK condition occurs.
That is, at the beginning of i2c_wait_ack, the sda pin must be configured as an input pin, and once the ACK signal is obtained, sda is set back to an open-drain output pin.
A simple solution:
@echo off
:: library.bat
:: EnableDelayedExpansion is required for the !_count_! and !_function_args_! references below
setlocal EnableDelayedExpansion
:: the name of the function that will be called
set _function_name_=%1
:: the arguments of the function that will be called
set _function_args_=
set _count_=0
:: remove the first argument passed to the script (function name)
for %%f in (%*) do (
    set /a _count_+=1
    if !_count_! GTR 1 set _function_args_=!_function_args_! %%f
)
echo Script args: %*
call %_function_name_% %_function_args_%
endlocal
exit /B

:function1
echo function name: %_function_name_%
echo function args: %*
exit /B

:function2
echo function name: %_function_name_%
echo function args: %*
exit /B

:: examples of use
:: call library.bat :function1 1 2 3
:: call library.bat :function2 4 5 6
This is very, very old, but still an issue that can occur. I feel however that the answer from @Lajos Arpad did not really address the issue, or I did not understand your question.
How I read it: your API talks to an external database that is created by a webshop framework. You want to support a newer version of that framework, which uses a slightly different database model.
Now the problem is that when you update your DbContext (model) to the new framework, it will be incompatible with the older framework.
Your reply to @Lajos Arpad says you intend to just focus on the new framework and keep a version of the source for the older framework code.
BUT that would mean you can't easily fix issues that are present in both the older and the newer framework versions without having to fix them in both source trees.
@Pedro Luz states it is not possible with a DbContext, and a solution will have to be handcrafted.
We don't use EF at present and have our own POCO classes and database context, where we can adjust what is sent to the database based on a database version flag that the context knows about.
Usually we only support a few versions, and eventually we can clean out specific version switches after that version is no longer in circulation.
For anybody reading this: is there (in 2025) some way to have an EF context and model with fields that will be sent to, or ignored by, the database at runtime, so you can support multiple active versions of a database model with the same source code? We regularly use this to put new features in production code, but no customer can see them since their database is still on a previous version. Then when it becomes time to release, we upgrade the database and voila, the feature lights up.
To mention a team, what do you use?
This solution worked well for me!
https://github.com/Kaligula1987/JS-URL-Endpoint-Harvester
JS-URL-Endpoint-Harvester
"A Python tool to extract, validate, and classify URLs from JavaScript files." "Effortlessly scan JavaScript files to find and categorize hidden URLs—ideal for endpoint discovery!"
A Python script to extract, validate, and classify URLs from JavaScript files.
The crash happens because listFiltered is null when getCount() is called. Fix it by changing getCount()
to:
return listFiltered != null ? listFiltered.size() : 0;
Also, move filterResults.count and filterResults.values outside the loop in performFiltering() to avoid inconsistent behavior.
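A sketch of that performFiltering() shape (the item type and source list are placeholders; this goes inside your Filter subclass, with List/ArrayList from java.util):
@Override
protected FilterResults performFiltering(CharSequence constraint) {
    String query = constraint == null ? "" : constraint.toString().toLowerCase();
    List<String> filtered = new ArrayList<>();
    for (String item : fullList) {   // fullList: the unfiltered backing data
        if (item.toLowerCase().contains(query)) {
            filtered.add(item);
        }
    }
    FilterResults results = new FilterResults();
    results.count = filtered.size();  // set once, after the loop
    results.values = filtered;
    return results;
}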
I solved this for myself by installing the Python headers for my version.
At least this fixes it for the psycopg2 package.
Use this for the server database connection:
DB_CONNECTION=mysql
DB_HOST=your Hostinger server IP
DB_PORT=3306
DB_DATABASE=your database name here
DB_USERNAME=your database username here
DB_PASSWORD=your database password here
You might be missing the Project Creator role on the target organization. The following checklist should help:
https://cloud.google.com/resource-manager/docs/project-migration-checklist
The problem was that there was a custom struct with the name Context, which conflicted with the Context type required by UIViewControllerRepresentable methods. Changing the structure name solved the problem.
For those interested, here is a sample program, and the final custom function. (Save as custom-knit.R.)
custom_knit <- function(input, ...) {
  # Initial version from multiple sites/contributors:
  # https://stackoverflow.com/questions/79595316/knit-once-save-twice
  # https://stackoverflow.com/questions/66620582/customize-the-knit-button-with-source
  #
  # parameters for rendering: set to none to ignore
  # suffix:
  #   date     (rmd name + YYYYMMDD.html)
  #   datetime (rmd name + YYYYMMDD-HHMMSS.html)
  #   none     (just rmd name)
  #
  # readme:
  #   path/filename (e.g., /Production/_Readme-StatAreas2022)
  #   filename      (e.g., _Readme-IndustryHazards)
  #   none          (no additional simply named document created)

  # read Rmd yaml into R object
  yaml <- rmarkdown::yaml_front_matter(input)

  # Rmd file name without path or extension
  rmd_basename <- tools::file_path_sans_ext(basename(input))

  # Suffix creation for complex name
  if (yaml$params$suffix == "date") {
    complex_name <- paste0(rmd_basename, '-', format(Sys.time(), "%Y%m%d"), '.html')
  } else if (yaml$params$suffix == "datetime") {
    complex_name <- paste0(rmd_basename, '-', format(Sys.time(), "%Y%m%d-%H%M%S"), '.html')
  } else {
    complex_name <- paste0(rmd_basename, '.html')
  }

  # render Rmd file and record absolute path to output file
  complex_path <- rmarkdown::render(
    input,
    output_file = complex_name,
    output_dir = "Output",
    envir = globalenv()
  )

  # Process additional copy if requested
  simple_path <- yaml$params$simple

  # perform copy
  if (yaml$params$simple != "none") {
    simple_path <- paste0(simple_path, '.html')
    file.copy(complex_path, simple_path, overwrite = TRUE)
  }
}
Here is the YAML section:
---
title: "RenderExample - Custom knit"
subtitle: "see params"
author: "Mark Friedman"
date: "`r format(Sys.time(), '%d %B, %Y %H:%M')`"
output: html_document
params:
  suffix: datetime # date, datetime, none
  simple: Production/_Readme-Stat2022 # path+base, base only, none
knit: (function(input, ...) {
  source("custom-knit.R");
  custom_knit(input, ...)
  })
---
The problem was resolved by adding worker.format: 'es' in vite.config.js. Unfortunately, it was hard to find a solution because the error messages are not informative enough.
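For reference, the option lives in the worker section of the config:
// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  worker: {
    format: 'es',
  },
});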
I found the solution to this problem. First, do as shown in the video https://www.youtube.com/watch?v=QMAgD9SS5_E.
You only need to make one change, which is to set the Slave Mode to Reset Mode.
In case anyone is looking for a solution to the ERROR_APPPOOL_VERSION_MISMATCH error when deploying a web job to Azure App Service, adding this line to the PropertyGroup section of the csproj file helps:
<IgnoreDeployManagedRuntimeVersion>True</IgnoreDeployManagedRuntimeVersion>
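In context it looks like this (the rest of the PropertyGroup is whatever your project already contains):
<PropertyGroup>
  <IgnoreDeployManagedRuntimeVersion>True</IgnoreDeployManagedRuntimeVersion>
</PropertyGroup>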
It’s not directly possible to convert Swift and C# code into XAML because they belong to different ecosystems. Swift is primarily used for iOS development, while C# is typically used with frameworks like .NET or Xamarin for cross-platform development. XAML (eXtensible Application Markup Language) is a declarative markup language used for designing UI in .NET-based frameworks like WPF, UWP, and Xamarin.Forms.
If your goal is to port your iOS app to a cross-platform solution that uses XAML (e.g., Xamarin.Forms), you would need to:
Rebuild the UI using XAML in Xamarin.Forms (which supports both iOS and Android).
Translate the logic written in Swift to C# if necessary.
Ensure that platform-specific features are handled using dependency services or platform-specific code.
It’s not a direct "conversion," but a reimplementation for a new framework and platform.
If you're just looking to create a cross-platform version of your app, you might consider Xamarin.Forms, which uses C# for the logic and XAML for the UI. This would allow you to write once and deploy on multiple platforms (iOS, Android, etc.).
For more tailored solutions and guidance, Jaz Infotech can provide support in mobile app development and cross-platform technologies.
try using "readBody" to deal with it
const body = await readBody(event)
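Wrapped in a full Nuxt 3 / h3 server route (where defineEventHandler and readBody are auto-imported), that could look like this sketch; the route path and return shape are assumptions:
// server/api/example.post.js (hypothetical route)
export default defineEventHandler(async (event) => {
  const body = await readBody(event);
  return { received: body };
});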
My phone is broadcasting; the screen repair guy hooked me up with Chromium and a hidden admin. I should give the phone back to AT&T since it's open source and not really mine. The repair shop is going to get caught; I'm sure I'm not the first or the last one they decided to steal from or clone. PayPal declined an unknown $298 attempt and Capital One caught an attempted unauthorized charge. Losers who can't make it on their own. Like a leech or parasite.
Don't set DISPLAY in the Dockerfile — instead, pass it at runtime to ensure it matches your host system.
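On a Linux host that typically means something like the following (the image name is a placeholder):
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix my-java-gui-image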
These will help you:
https://github.com/Kinsella-Consulting/docker-java-swing?tab=readme-ov-file
https://learnwell.medium.com/how-to-dockerize-a-java-gui-application-bce560abf62a
For me the following steps didn't solve the issue itself, but I could see the content of the data via the terminal, and I can download a file if needed.
Step 1: Check if adb exists at this path:
ls ~/Library/Android/sdk/platform-tools/
Step 2: If it does, add adb to your PATH:
nano ~/.zshrc
Add the path via: export PATH=$PATH:$HOME/Library/Android/sdk/platform-tools
Reload the shell config: source ~/.zshrc
Step 3: Run adb devices to check that your phone is connected.
Step 4: Run a similar command to extract a particular file from the data.
Example:
adb shell run-as com.process-xyz.demoapp cat files/Logs_Generated_kv_Linux.txt > output.txt
I mocked a role and ran the playbook; both roles' tasks ran successfully, as they should. If the second role doesn't run in your environment, I bet it's because the first one failed at some point. So the playbook is correct; something is wrong with the role.
I ran into the same problem on a different Unity Version. I used Android Studio to install only the SDK for Android 13 to 15.
For me, deleting and reinstalling the SDKs and also installing older versions solved the problem.
I ended up finding the solution by using test and testContext.
idNumber: Yup.object({
  number: Yup.number(),
  label: Yup.string()
})
  .test("", "ID is required", (value, testContext) => {
    let unknown = testContext.parent.name.isNameUnknown
    if (!unknown) {
      testContext.schema.fields.number.required()
      testContext.schema.fields.label.required()
    }
    return unknown || (!unknown && value.number && value.label)
  })
Could you share how you ended up resolving this? I am having the same problem 7 years later.
The original post was missing r, the velocity magnitude, in the quiver definition: .quiver(x, y, u, v, r).
You can also use online tools like https://www.jsonvalidators.org/, or any other such tools; they can reduce effort.
Is it necessary to carry out a Point-in-Time Recovery for both an original Azure database server and its read-only complement?
No, it is not necessary to carry out a Point-in-Time Recovery (PITR) for both the original Azure database server and its read-only complement. The read-only replica is a replication of the primary database server, typically used for load balancing read workloads. PITR is only supported on the primary server, as it relies on transaction log backups and full backups maintained for that server. Read-only replicas cannot be restored independently using PITR. You must restore the original server and then recreate any read replicas from the newly restored server.
If you're planning PITR due to data loss, corruption, or rollback needs, perform it on the original server. The read-only replicas are derived from it and will be invalid after PITR unless recreated.
In the event of a catastrophe, you only need to perform a PITR on Server A (the Azure Database for PostgreSQL Flexible Server). Server B (the read-only replica) does not support independent PITR and does not maintain its own backups. After restoring Server A, you can recreate Server B as a read replica from the restored server if needed. Restoring both is unnecessary and would incur extra cost.
For more information on PITR in Azure Database for PostgreSQL Flexible Server, refer to this article.
CORS Inspector
A Python script to inspect and test Cross-Origin Resource Sharing (CORS) headers for security vulnerabilities. The tool sends HTTP and OPTIONS requests to a target server and analyzes the server's response to check for common misconfigurations.
It is supported now; however, it's still in preview:
https://devblogs.microsoft.com/cosmosdb/how-to-change-your-partition-key/
OK, I got it, C3roe: I just had to add absolute h-full to the image tag. Is there any way to do this without the relative/absolute classes?
<script src="https://cdn.tailwindcss.com"></script>
<div class="flex flex-col h-screen gap-2 p-2">
<div class="flex gap-1 bg-orange-400 grow">
<div class="relative w-3/5 flex justify-center bg-white">
<img src="https://picsum.photos/800/1000" class="absolute h-full rounded-lg object-fit" />
</div>
<div class="w-2/5 flex flex-col rounded-lg border-2 border-slate-200">
This is second child of the first child of root. The parent is set to grow and it should not grow beyond red box.
</div>
</div>
<div class="flex flex-col p-1 bg-red-500 text-white">
This is second child of the parent div which stays at bottom.
</div>
</div>
Yes, they will (see the ref). You can also use this example script from the Newman repo to set up a Node script to run them in parallel.
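A minimal sketch using the newman library directly (collection file names are placeholders); each run() is asynchronous, so the collections run in parallel:
const newman = require('newman');

['collection-a.json', 'collection-b.json'].forEach((file) => {
  newman.run({ collection: file, reporters: 'cli' }, (err) => {
    if (err) { throw err; }
    console.log(`${file} finished`);
  });
});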
To automatically post job listings from a form to a public page in WordPress:
Use a Form Plugin: Install WPForms, Gravity Forms, or Formidable Forms with post submission support.
Create a Custom Post Type (optional): Use CPT UI or code to create a job_listing type.
Map Form Fields to Post Fields: Set the form to create a post (or custom post) on submission.
Display Listings on Frontend: Use a shortcode or WP Query loop on a page to show job posts.
Style & Manage Access: Customize layout with Elementor or blocks; restrict form use if needed.
In general, if you just want to rotate the tooltip text, just bind the tooltip using a new div for the text:
marker.bindTooltip(
  `<div>${text}</div>`,
  {
    permanent: true,
    direction: 'center',
    className: "markerText"
  }
);
and when rotating the marker, just rotate the text in the div:
marker.setRotationAngle(newAngle);
const tooltip = marker.getTooltip();
if (tooltip) {
  tooltip.setContent(`<div style="transform: rotate(${newAngle}deg); transform-origin: center center;">${text}</div>`);
}
When I have pod issues, I often do a full clean to get a fresh install:
flutter clean
flutter pub cache clean
rm -rf Pods
rm -rf .symlinks
rm -rf Flutter/Flutter.podspec
rm Podfile.lock
rm -rf build
rm -rf ~/Library/Developer/Xcode/DerivedData
flutter pub get
cd ios
pod repo update
pod install
cd ..
flutter build ipa --release
Maybe that can help you.
Warning: 'flutter pub cache clean' cleans all the pubs you have in cache, for all projects. You'll have to run 'flutter pub get' in every single project you want to open.
Is it possible to mention teams with the API in Azure DevOps?
I want to add a comment with the API, but I want to mention a team and I can only add text.
This did not work for me in version 5.8.0 either.
So I had a look at the available XPath functions, since SoapUI is relying on a library for this. Here is what is working:
starts-with(//geonames/timezone/time, "2012-07-25")
The expected result field should contain: true
See more functions here: XPath functions
It will work with the following chain:
with() adds eager loading with constraints (like selecting specific columns from the relation).
skip(10) tells the query to offset the first 10 records.
get() executes the query.
$links = SomeModel::with('method:column1,column2')->skip(10)->get();
I've analyzed your Android code, and I see the issue with your variables resetting when radio buttons are selected. Let me explain the problem and solution.
The issue occurs because you're initializing idPregunta and idRespuesta as regular variables inside your composable function. Since composable functions can be recomposed (re-executed) whenever state changes - like when selecting a radio button - these variables get reset to their initial values each time.
// These variables are being reinitialized on every recomposition
var idPregunta = 1
var idRespuesta = 1
As you correctly identified in your edit, you need to use remember { mutableStateOf() } for these variables to preserve their values across recompositions. This ensures your ID values persist when the UI updates.
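A sketch of that change, keeping your variable names:
import androidx.compose.runtime.*

@Composable
fun QuestionScreen() {
    // remember + mutableStateOf survives recomposition
    var idPregunta by remember { mutableStateOf(1) }
    var idRespuesta by remember { mutableStateOf(1) }
    // update these from your radio button callbacks; the values now persist
}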
If you want to know what I use in such a situation:
<div *ngIf="isLoggedIn">
<h1>Welcome User</h1>
.....
</div>
<div *ngIf="isForgotPW">
<h1>Forgot Password</h1>
.....
</div>
In your .ts file you can define and change values easily.
isLoggedIn = false;
isForgotPW = true;
My Visual Studio console application build was failing without showing any errors in the output console. Even "Clean Solution" would fail silently, despite setting MSBuild verbosity to "Detailed".
The application was creating directories and files with names that, combined with the already deep project path, exceeded Windows' MAX_PATH limit (260 characters).
What fixed it:
Unload the project (right-click → "Unload Project")
Reload the project to identify problematic files
Shorten generated file/directory names in code
Keep in mind:
Windows has a default 260-character path limit
MSBuild often fails silently when encountering this limit
Despite removing and re-adding the package dependency, the issue persisted. However, restarting Xcode resolved the errors.
Xcode version 16.1
I have figured it out. networkService.GetDetailsById(networkId) filtered the network users, but it was implemented like this: network.NetworkUsers = network.NetworkUsers.Where(x => x.UserProxyId == currentUser.Id).ToList();
which overwrites the list, and EF Core thinks I want to delete the rest.
Oops.
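A safer pattern is to keep the filtered result in a local variable instead of reassigning the tracked navigation collection (a sketch using the same names):
// network.NetworkUsers stays untouched, so EF Core won't
// interpret the missing entries as deletions
var filteredUsers = network.NetworkUsers
    .Where(x => x.UserProxyId == currentUser.Id)
    .ToList();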
Try changing the path pattern to *.html (without the forward slash). Then set the Cache policy name to Managed-CachingDisabled and it should work.
Ensure that the directory `D:\htdocs\hack\storage\framework\sessions` exists and is writable. If not, you can create it using `mkdir D:\htdocs\hack\storage\framework\sessions`, and then run `file_put_contents()` again.
Thank you all for the advice! Here's the code that ended up working perfectly:
function removeBlockedFromVotingPage() {
document.querySelectorAll('td').forEach(td => {
const tr = td.closest('tr');
if (!tr) return;
const div = td.querySelector('div');
const descriptor = safeText(div);
const text = safeText(td).replace(/\u00A0/g, '');
if (!div && text === '') {
tr.remove();
console.log('[RYM Filter] Removed empty/downvoted row');
} else if (div && isBlocked(descriptor)) {
const prev = tr.previousElementSibling;
const next = tr.nextElementSibling;
if (prev?.matches('div.descriptora') && isBlank(prev)) prev.remove();
if (next?.matches('div.descriptora') && isBlank(next)) next.remove();
tr.remove();
console.log(`[RYM Filter] Removed descriptor: "${descriptor}"`);
}
});
// Remove leftover green separator blocks
document.querySelectorAll('div.descriptora, div.descriptord').forEach(div => {
if (isBlank(div)) {
div.remove();
console.log('[RYM Filter] Removed leftover descriptor block');
}
});
}
Add wix-site-id: <siteId> to the header of the request.
(The site ID can be found in the app.config.json file.)
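For example, with fetch (the endpoint URL and auth value are placeholders):
const res = await fetch('https://www.wixapis.com/your-endpoint', {
  headers: {
    Authorization: accessToken,   // whatever auth you already send
    'wix-site-id': siteId,        // from app.config.json
  },
});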
Create React App is no longer maintained; personally, for SPAs I use Vite.
You need to learn about cache coherence before trying to understand cache coherence protocols. The decision is made by looking at the coherence state of the line; sometimes it is decided by an algorithm which is hardcoded.
android.credentials.GetCredentialException.TYPE_NO_CREDENTIAL, msg = During get sign-in intent, failure response from one tap: 16: [28434] Cannot find an eligible account.}
Never mind, it was my own fault. I had added display: none to the class v-input__details, because this section of code was taking up space under inputs and causing me alignment issues. I will have to think of a better solution to fix the alignment issues now.
I ran into the same issue when trying to create a pipeline for a private repository under a GitHub organization. The error I received was:
"Unable to configure a service on the selected GitHub repository. This is likely caused by not having the necessary permission to manage hooks for the selected repository."
In our case, the issue was due to missing GitHub App permissions. We resolved it by going to the GitHub organization settings, adding Azure DevOps under GitHub Apps, and explicitly granting access to the repositories we wanted to use in Azure DevOps.
Important: Only a GitHub organization admin or a user with admin access to the repository can make these permission changes. If you don’t have that level of access, you’ll need to request it or ask someone with the necessary rights to configure it for you.
Once the permissions were set correctly, Azure DevOps was able to configure the required webhooks, and the pipeline setup worked smoothly.
Hope this helps someone facing the same issue!
I got the idea from this post: https://stackoverflow.com/a/65326693/22397626
So first install this npm package: https://www.npmjs.com/package/wavefile?activeTab=readme and then use the code below:
const wavefile = require('wavefile');

let audio = await this.openai.audio.speech.create({
  model: "gpt-4o-mini-tts",
  voice: "ash",
  input: 'speech',
  response_format: "wav",
});

// read the buffer from the response object created above
let audioBuffer = Buffer.from(await audio.arrayBuffer());
let wav = new wavefile.WaveFile(audioBuffer);
wav.toBitDepth('8');
wav.toSampleRate(8000);
wav.toMuLaw();
let payload = Buffer.from(wav.data.samples).toString('base64');
Sometimes you just need to restart your terminal or editor (like Visual Studio Code) so it knows that npm and node exist.
That's what I needed to do to get things working.
Another call center service offered by Bank BTN is via WhatsApp (+6287766656123). Bank BTN also has an email address that can be reached via ...
Currently the default HTTP version for HttpClient is 1.1, and in both examples you are using version 1.1.
The resource you are trying to fetch responds with HTTP 2.0, so try sending the request with:
var request = new HttpRequestMessage(HttpMethod.Get, "http://131.189.89.86:11920/SetupWizard.aspx/yNMgLvRtUj")
{
Version = new Version(2, 0) // change the version
};
HttpResponseMessage response = await client.SendAsync(request);
Here is the place where the exception occurs:
There are two potential root causes for this issue: either there is another piece of software using JGroups (standalone/WildFly/JBoss EAP/Infinispan/etc.) on the network with a different version of JGroups, or something completely unrelated is using the same multicast IP/port. The former typically happens when users use multicast for discovery with no authentication or encryption but use the same UDP multicast address.
Since you are using static discovery, finding the root cause should be easier. You should inspect all running Java processes for potentially conflicting versions, but most likely you need to examine what else is sending packets on this address (e.g., using Wireshark).
I tried testing it; the server is not responding now. Could you recheck that your server is running and the port is open?
Pretty sure it's a server issue.
What I tried:
forcing HttpClient to use HTTP/1.1
HttpWebRequest
a socket-level approach
forced headers
Adding some more visual context to these answers, as I struggled myself to follow the above, even though those answers helped me find Branches again. In Source Control you may need to look at the end of the pane for GITLENS. There are two ways of doing this:
1. Use the GITLENS collapsible section in the Source Control extension and right-click to get this:
2. Or click the three dots and play with Group/Detach Views.
If none of the above works for you and you are on Windows, using Git Bash:
I didn't find a mirror feature in this; it works for the front camera, but I don't want to flip the image.
productFlavors {
    dev {
        dimension "flavor-type"
        applicationId "ais.xxxx.app.dev"
        resValue "string", "app_name", "xxxx DEV"
    }
    qa {
        dimension "flavor-type"
        applicationId "xxxx"
        resValue "string", "app_name", "xxxx QA"
    }
    prod {
        dimension "flavor-type"
        applicationId "ais.xxxx.app"
        resValue "string", "app_name", "xxxx"
    }
}
Add the flavor path config.
How to convert the digital 2- to 2?
Solution for me: Add the glue to the run configuration!
Firstly, I suppose you are using @mui/material v6.
Grid2 is different from Grid: Grid2 has no item or xs props.
It's like:
<Grid2
  size={{ xs: 7, md: 4, lg: 1 }}
>
</Grid2>
https://v6.mui.com/material-ui/react-grid2
Secondly, Grid2 was removed from @mui/material v7; they replaced the old Grid with Grid2 and renamed it to Grid.
https://v7.mui.com/material-ui/react-grid/